WO2021223709A1 - Systems and methods for barcode decoding - Google Patents


Info

Publication number
WO2021223709A1
Authority
WO
WIPO (PCT)
Prior art keywords
symbol
determining
character
boundary
row
Prior art date
Application number
PCT/CN2021/091910
Other languages
French (fr)
Inventor
Wenhui Li
Shijie MA
Original Assignee
Zhejiang Huaray Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Zhejiang Huaray Technology Co., Ltd.
Priority to JP2022566603A (JP7481494B2)
Priority to KR1020227040154A (KR20230002813A)
Publication of WO2021223709A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10: Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14: Methods or arrangements for sensing record carriers using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404: Methods for optical code recognition
    • G06K 7/1408: Methods for optical code recognition, the method being specifically adapted for the type of code
    • G06K 7/1417: 2D bar codes
    • G06K 7/1439: Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K 7/1452: Methods for optical code recognition including a method step for retrieval of the optical code, detecting bar code edges
    • G06K 7/146: Methods for optical code recognition, the method including quality enhancement steps

Definitions

  • This disclosure generally relates to image processing, and more particularly, relates to systems and methods for decoding a barcode in an image.
  • Barcode symbols such as portable data file (PDF) 417 barcodes are widely used in daily life, e.g., in social activities, certificates, management, transportation, payment, etc.
  • a barcode symbol has a series of bars and spaces encoding high-density data.
  • the barcode symbol can be read from an image using a scanner, such as a camera. After the barcode symbol is read from the image, the data encoded in the barcode symbol can be decoded using a processing device.
  • however, errors usually occur during decoding due to factors such as image distortion, uneven lighting, etc. Therefore, it is desirable to provide systems and methods for decoding the barcode symbol accurately and efficiently.
  • a system may comprise at least one storage device storing a set of instructions; and at least one processor configured to communicate with the at least one storage device.
  • the at least one processor is directed to perform operations including obtaining a symbol image of a symbol including a plurality of symbol characters in a symbol region; determining a plurality of row lines along a length direction of the symbol; determining, based on the plurality of row lines, a plurality of column boundaries among the plurality of symbol characters, each of the plurality of column boundaries corresponding to two consecutive columns of two symbol characters of the plurality of symbol characters; determining, based on the plurality of row lines, a plurality of row boundaries among the plurality of symbol characters, each of the plurality of row boundaries corresponding to two adjacent rows of the plurality of symbol characters; and for each of the plurality of symbol characters, determining a character region corresponding to the symbol character based on the plurality of column boundaries and the plurality of row boundaries; and decoding the symbol character based on grey values associated with the character region corresponding to the symbol character.
  • a method is provided.
  • the method may be implemented on a computing device having a processor and a computer-readable storage device.
  • the method may comprise obtaining a symbol image of a symbol including a plurality of symbol characters in a symbol region; determining a plurality of row lines along a length direction of the symbol; determining, based on the plurality of row lines, a plurality of column boundaries among the plurality of symbol characters, each of the plurality of column boundaries corresponding to two consecutive columns of two symbol characters of the plurality of symbol characters; determining, based on the plurality of row lines, a plurality of row boundaries among the plurality of symbol characters, each of the plurality of row boundaries corresponding to two adjacent rows of the plurality of symbol characters; and for each of the plurality of symbol characters, determining a character region corresponding to the symbol character based on the plurality of column boundaries and the plurality of row boundaries; and decoding the symbol character based on grey values associated with the character region corresponding to the symbol character.
  • a non-transitory readable medium comprises at least one set of instructions, wherein when executed by at least one processor of a computing device, the at least one set of instructions directs the at least one processor to perform a method.
  • the method may comprise obtaining a symbol image of a symbol including a plurality of symbol characters in a symbol region; determining a plurality of row lines along a length direction of the symbol; determining, based on the plurality of row lines, a plurality of column boundaries among the plurality of symbol characters, each of the plurality of column boundaries corresponding to two consecutive columns of two symbol characters of the plurality of symbol characters; determining, based on the plurality of row lines, a plurality of row boundaries among the plurality of symbol characters, each of the plurality of row boundaries corresponding to two adjacent rows of the plurality of symbol characters; and for each of the plurality of symbol characters, determining a character region corresponding to the symbol character based on the plurality of column boundaries and the plurality of row boundaries; and decoding the symbol character based on grey values associated with the character region corresponding to the symbol character.
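As a concrete illustration of the claimed region layout, the sketch below builds one character region per row band and column band once the boundaries are known. This is a minimal Python sketch; the names (`CharacterRegion`, `character_regions`) are hypothetical and not from the patent, and the boundary lists are assumed to be sorted pixel positions that include the outer symbol-region boundaries.

```python
from dataclasses import dataclass

@dataclass
class CharacterRegion:
    # bounding box of one symbol character, in pixel coordinates
    row_start: int
    row_end: int
    col_start: int
    col_end: int

def character_regions(row_boundaries, col_boundaries):
    """Pair consecutive row boundaries with consecutive column boundaries,
    producing one region per symbol character in reading order."""
    regions = []
    for r0, r1 in zip(row_boundaries, row_boundaries[1:]):
        for c0, c1 in zip(col_boundaries, col_boundaries[1:]):
            regions.append(CharacterRegion(r0, r1, c0, c1))
    return regions
```

For example, three row boundaries and three column boundaries delimit a 2-by-2 grid of four character regions.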
  • determining, based on the plurality of row lines, the plurality of column boundaries among the plurality of symbol characters includes determining a width of a reference symbol character associated with the plurality of symbol characters; determining a reference width range based on the width of the reference symbol character; and determining the plurality of column boundaries among the plurality of symbol characters based on boundary characteristics between adjacent symbol characters and the reference width range.
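The reference width range filter described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: `candidates` are assumed to be boundary x-positions detected on a row line, `ref_width` is the measured width of the reference symbol character, and the 20% tolerance is a hypothetical choice.

```python
def select_column_boundaries(candidates, ref_width, tolerance=0.2):
    """Keep candidate x-positions whose spacing from the previously accepted
    boundary falls inside the reference width range [w*(1-t), w*(1+t)]."""
    lo, hi = ref_width * (1 - tolerance), ref_width * (1 + tolerance)
    accepted = [candidates[0]]
    for x in candidates[1:]:
        gap = x - accepted[-1]
        if lo <= gap <= hi:
            accepted.append(x)
        elif gap > hi:
            # gap too wide: a boundary was likely missed; resynchronise here
            accepted.append(x)
        # gap below the range: spurious candidate inside a character, skip it
    return accepted
```

A spurious edge 4 pixels after a true boundary is rejected because 4 lies well below the reference width range.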
  • the reference symbol character includes a start symbol character or an end symbol character.
  • determining the width of the reference symbol character includes obtaining a preset codeword string associated with the reference symbol character; for at least one of the plurality of row lines, identifying at least one reference line segment based on grey values of pixels on the at least one of the plurality of row lines and predetermined grey values associated with the preset codeword string; and designating a length of the at least one reference line segment as the width of the reference symbol character.
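One plausible reading of this step, sketched in Python: run-length encode a row line's binarized grey profile and slide a window over it until the runs, rescaled to modules, match the preset codeword string. The PDF417 start pattern is 8-1-1-1-1-1-1-3 modules (17 modules total); the function names and the matching tolerance are illustrative assumptions.

```python
START_PATTERN = [8, 1, 1, 1, 1, 1, 1, 3]  # PDF417 start character, in modules

def runs(bits):
    """Run-length encode a binary row profile (1 = bar, 0 = space)."""
    out = []
    for b in bits:
        if out and out[-1][0] == b:
            out[-1][1] += 1
        else:
            out.append([b, 1])
    return out

def reference_width(bits, pattern=START_PATTERN, tol=0.5):
    """Slide an 8-run window over the profile; when the runs, divided by the
    estimated module size, match the pattern, the window's total pixel
    length is the reference character width."""
    r = runs(bits)
    n = len(pattern)
    for i in range(len(r) - n + 1):
        window = r[i:i + n]
        if window[0][0] != 1:          # the start pattern begins with a bar
            continue
        total = sum(w for _, w in window)
        module = total / sum(pattern)  # estimated pixels per module
        if all(abs(w / module - p) <= tol for (_, w), p in zip(window, pattern)):
            return total
    return None
```

At 2 pixels per module the matched window spans 34 pixels, which becomes the reference symbol character width.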
  • determining, based on the plurality of row lines, the plurality of row boundaries among the plurality of symbol characters includes identifying the plurality of row boundaries among the plurality of symbol characters from the plurality of row lines based on boundary characteristics between adjacent symbol characters.
  • decoding the symbol character based on grey values associated with the character region corresponding to the symbol character includes dividing, along a row direction, the character region corresponding to the symbol character into a plurality of blocks; determining a global grey value of each of the plurality of blocks; determining a contrast value of the symbol character based on the global grey values of the plurality of blocks; and determining a codeword corresponding to the symbol character based on the contrast value.
  • determining the contrast value of the symbol character based on the global grey values of the plurality of blocks includes determining a first ratio of grey values of blocks of a first type in the character region to a count of the blocks of the first type; determining a second ratio of grey values of blocks of a second type in the character region to a count of the blocks of the second type; and determining the contrast value of the symbol character based on a difference value between the first ratio and the second ratio.
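The contrast computation can be sketched as below, assuming blocks are classified into the two types by a grey threshold (a hypothetical choice; the patent does not fix the classification rule). Note that the claimed ratio of summed grey values to block count is simply each group's mean grey value.

```python
def character_contrast(block_greys, threshold=128):
    """Split a character's global block grey values into dark and light
    groups, average each group, and return the difference as the contrast."""
    dark = [g for g in block_greys if g < threshold]
    light = [g for g in block_greys if g >= threshold]
    if not dark or not light:
        return 0  # degenerate region: no usable contrast
    light_mean = sum(light) / len(light)  # "first ratio" in the claim
    dark_mean = sum(dark) / len(dark)     # "second ratio" in the claim
    return light_mean - dark_mean
```

A large contrast value indicates a cleanly printed character; a small one may signal distortion or uneven lighting, which downstream codeword matching can take into account.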
  • the operations further include determining a start boundary, an end boundary, an upper boundary, and a lower boundary of the symbol region.
  • determining the start boundary of the symbol region includes for at least one of the plurality of row lines, identifying at least one end point of a start symbol character based on grey values of pixels on the at least one of the plurality of row lines and predetermined grey values associated with a start codeword string; and determining the start boundary of the symbol region based on the at least one end point of the start symbol character.
  • determining the end boundary of the symbol region includes for at least one of the plurality of row lines, identifying at least one start point of an end symbol character based on grey values of pixels on the at least one of the plurality of row lines and predetermined grey values associated with an end codeword string; and determining the end boundary of the symbol region based on the at least one start point of the end symbol character.
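Once the end points of the start symbol character (or the start points of the end symbol character) have been identified on several row lines, a boundary line can be estimated from them. The least-squares sketch below is a hypothetical helper, not the patent's method; it fits x = a*y + b through the (row, x) points so that a skewed symbol still yields a straight boundary.

```python
def fit_boundary(points):
    """Least-squares fit of x = a*y + b through (y, x) end points
    collected from several row lines."""
    n = len(points)
    sy = sum(y for y, _ in points)
    sx = sum(x for _, x in points)
    syy = sum(y * y for y, _ in points)
    syx = sum(y * x for y, x in points)
    denom = n * syy - sy * sy
    if denom == 0:
        return 0.0, sx / n  # all points on one row: vertical boundary at mean x
    a = (n * syx - sy * sx) / denom
    b = (sx - a * sy) / n
    return a, b
```

End points (0, 10), (10, 12), (20, 14) yield a slope of 0.2 and an intercept of 10, describing a slightly slanted start boundary.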
  • determining the upper boundary of the symbol region includes for each of the plurality of column boundaries, determining a plurality of intersections of the plurality of row lines and the column boundary; performing an upward traverse until an upper pixel of a first intersection is identified, wherein the upper pixel and the first intersection satisfy upper boundary characteristics; and determining the upper boundary of the symbol region based on the first intersection of each of the plurality of column boundaries.
  • determining the lower boundary of the symbol region includes: for each of the plurality of column boundaries, determining a plurality of intersections of the plurality of row lines and the column boundary; performing a downward traverse until a lower pixel of a second intersection is identified, wherein the lower pixel and the second intersection satisfy lower boundary characteristics; and determining the lower boundary of the symbol region based on the second intersection of each of the plurality of column boundaries.
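The upward traverse above can be sketched as a walk along a column boundary that stops as soon as the pixel above no longer satisfies the boundary characteristics. In this illustrative sketch, `column` holds the grey values along the column boundary and `is_inside` stands in for whatever predicate the embodiment uses; the downward traverse is symmetric.

```python
def upper_boundary_point(column, y_start, is_inside):
    """Walk upward from the intersection at row y_start until the pixel
    above fails the inside-symbol predicate; return the topmost inside row."""
    y = y_start
    while y > 0 and is_inside(column[y - 1]):
        y -= 1
    return y
```

Starting from an intersection at row 4 of a column whose top two pixels are background, the walk stops at row 2, the first row still inside the symbol region.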
  • FIG. 1 is a schematic diagram illustrating an exemplary image processing system according to some embodiments of the present disclosure
  • FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure
  • FIG. 3 is a schematic diagram illustrating exemplary components of an exemplary terminal according to some embodiments of the present disclosure
  • FIG. 4 is a block diagram illustrating an exemplary processing device according to some embodiments of the present disclosure.
  • FIG. 5 is a flow chart illustrating an exemplary process for decoding symbol characters of a symbol in a symbol image according to some embodiments of the present disclosure
  • FIGs. 6 and 7 illustrate an exemplary PDF 417 barcode according to some embodiments of the present disclosure
  • FIG. 8 is a schematic diagram illustrating exemplary row lines in a symbol image according to some embodiments of the present disclosure.
  • FIG. 9 is a partial enlarged view of a PDF 417 barcode according to some embodiments of the present disclosure.
  • FIG. 10 is a schematic diagram illustrating an exemplary row boundary between two adjacent symbol characters according to some embodiments of the present disclosure.
  • FIG. 11 illustrates an exemplary symbol image according to some embodiments of the present disclosure
  • FIG. 12 is a flow chart illustrating an exemplary process for determining a plurality of column boundaries among a plurality of symbol characters in a symbol region of a symbol according to some embodiments of the present disclosure
  • FIG. 13 illustrates an exemplary symbol image according to some embodiments of the present disclosure
  • FIG. 14 is a flow chart illustrating an exemplary process for decoding a symbol character based on grey values associated with a character region corresponding to the symbol character according to some embodiments of the present disclosure
  • FIG. 15 is a flow chart illustrating an exemplary process for decoding symbol characters of a symbol in a symbol image according to some embodiments of the present disclosure.
  • FIG. 16 is a schematic diagram of an exemplary PDF 417 barcode according to some embodiments of the present disclosure.
  • modules of the system may be referred to in various ways according to some embodiments of the present disclosure, however, any number of different modules may be used and operated in a client terminal and/or a server. These modules are intended to be illustrative, not intended to limit the scope of the present disclosure. Different modules may be used in different aspects of the system and method.
  • flow charts are used to illustrate the operations performed by the system. It is to be expressly understood that the operations may or may not be performed in the order shown; they may be performed in reverse order or simultaneously. Besides, one or more other operations may be added to the flowcharts, or one or more operations may be omitted from the flowcharts.
  • the system may obtain a symbol image including a plurality of symbol characters in a symbol region.
  • the symbol image may be an image of a symbol (e.g., a portable data file (PDF) 417 barcode) .
  • a plurality of row lines along a length direction of the symbol may be determined. Each row line may traverse pixels in a same row in the symbol region.
  • the system may also determine, based on the plurality of row lines, a plurality of column boundaries among the plurality of symbol characters and a plurality of row boundaries among the plurality of symbol characters.
  • the system may determine a character region corresponding to the symbol character based on the plurality of column boundaries and the plurality of row boundaries.
  • the system may further decode the symbol character based on grey values associated with the character region corresponding to the symbol character.
  • the position of each of the plurality of symbol characters may be determined more accurately, a codeword corresponding to the symbol character may be determined more efficiently and accurately, and errors in the decoding process caused by factors such as image distortion, uneven light, etc., may be reduced or eliminated, thus improving the effectiveness and accuracy of the decoding process.
  • FIG. 1 is a schematic diagram illustrating an exemplary image processing system according to some embodiments of the present disclosure.
  • the image processing system 100 may process an image or a video composed of a plurality of images, and extract data from the image or the video.
  • the image processing system 100 may include an image source 101, a processing device 104, a buffer manager 105, a buffer 106, a transmitter 107, a terminal 108 (or a plurality of terminals 108) , a network 112, and a network storage device 113 (or a plurality of network storages 113) .
  • the image source 101 may provide an image or a video including at least one image (also referred to as a video frame) to a user of the terminal 108 through the network 112.
  • the image source 101 may include a scanner 102 and/or a media server 103.
  • the scanner 102 may be able to capture an image or a video including at least one image.
  • the image may be a symbol image.
  • the symbol image may be an image of a symbol.
  • the symbol image may be a still image or a video frame obtained from a video.
  • the symbol image may be a two-dimensional (2D) image or a three-dimensional (3D) image.
  • the scanner 102 may be a laser scanner, an optical scanner, etc.
  • the optical scanner may be a camera.
  • the camera may be, for example, a digital camera, a video camera, a security camera, a web camera, a smartphone, a tablet, a laptop, a video gaming console equipped with a web camera, a camera with multiple lenses, etc.
  • the camera may include a lens, a shutter, a sensor, a processing element, and a storage element.
  • the lens may be an optical element that focuses a light beam by means of refraction to form an image.
  • the lens may be configured to capture light from a target object (e.g., a barcode on a card, a paper, a bag, a package, etc. ) .
  • An aperture of the lens may define a size of a hole through which light passes to reach the sensor.
  • the aperture may be adjustable to adjust the amount of light that passes through the lens.
  • the focal length of the lens may be adjustable to adjust the coverage of the camera.
  • the shutter may be opened to allow light through the lens when an image is captured.
  • the shutter may be controlled manually or automatically by the processing element.
  • the sensor may be configured to receive light passing through the lens and transform light signals of the received light into electrical signals.
  • the sensor may include a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor.
  • the sensor may be in communication with the logic circuits, and may be configured to detect the target object using the lens and transform the received light from the target object into electronic signals.
  • the processing element may be configured to process data and/or information relating to the camera and/or control one or more components (e.g., the lens, the shutter) in the camera. For example, the processing element may automatically determine values of exposure parameters of the camera such as an exposure time, an exposure gain, and an aperture. The processing element may also adjust quality (e.g., sharpness, contrast, noise level, etc. ) of images taken by the camera
  • the processing element may be local or remote.
  • the processing element may communicate with the camera via a network.
  • the processing element may be integrated into the camera.
  • the storage element may store data, instructions, and/or any other information.
  • the storage element may store data obtained from the processing element.
  • the storage element may store captured images.
  • the storage element may store data and/or instructions that a processing device may execute or use to perform exemplary methods described in the present disclosure.
  • the storage element may include a mass storage, removable storage, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof.
  • Exemplary mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc.
  • Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc.
  • Exemplary volatile read-and-write memory may include a random-access memory (RAM) .
  • RAM may include a dynamic RAM (DRAM) , a double data rate synchronous dynamic RAM (DDR SDRAM) , a static RAM (SRAM) , a thyristor RAM (T-RAM) , and a zero-capacitor RAM (Z-RAM) , etc.
  • Exemplary ROM may include a mask ROM (MROM) , a programmable ROM (PROM) , an erasable programmable ROM (EPROM) , an electrically erasable programmable ROM (EEPROM) , a compact disk ROM (CD-ROM) , and a digital versatile disk ROM, etc.
  • the media server 103 may be a server (e.g., a computer or a group of computers) for storing or providing images or videos including a plurality of images.
  • the media server 103 may also include an image processing element (not shown) configured to process the images using exemplary methods introduced in the present disclosure.
  • the image source 101 may send the images or videos to the processing device 104.
  • the processing device 104 may process the images or videos.
  • the images or videos may include a symbol image.
  • the symbol image may be an image of a symbol.
  • the symbol may be a barcode (e.g., a portable data file (PDF) 417, a code 16K, a code 49, etc. ) .
  • the symbol may include a plurality of symbol characters in a symbol region.
  • the symbol character may refer to a minimum unit for encoding data in the symbol.
  • the symbol region may refer to a region corresponding to at least a portion of the symbol where the plurality of symbol characters are located.
  • the processing device 104 may decode the symbol and generate decoded data corresponding to the symbol.
  • the processing device 104 may be a single server or a server group.
  • the server group may be centralized or distributed.
  • the processing device 104 may be local or remote.
  • the processing device 104 may be implemented on a cloud platform.
  • the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
  • the processing device 104 may be implemented by a computing device 200 having one or more components as illustrated in FIG. 2.
  • the images, videos, and/or decoded data corresponding to the images or the videos may be stored in the buffer 106.
  • the buffer 106 may be managed by the buffer manager 105.
  • the buffer 106 may be a storage device for buffering the images, videos, and/or decoded data corresponding to the images or the videos to be transmitted through the network 112. It may be a device remote from the image source 101 or a local device integrated in the image source 101, such as the storage medium of the camera.
  • the buffer 106 may include a mass storage device, a removable storage device, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof. Exemplary mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc.
  • Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc.
  • Exemplary volatile read-and-write memory may include a random-access memory (RAM) , such as a dynamic RAM (DRAM) , a double data rate synchronous dynamic RAM (DDR SDRAM) , a static RAM (SRAM) , a thyristor RAM (T-RAM) , and a zero-capacitor RAM (Z-RAM) .
  • Exemplary ROM may include a mask ROM (MROM) , a programmable ROM (PROM) , an erasable programmable ROM (EPROM) , an electrically erasable programmable ROM (EEPROM) , a compact disk ROM (CD-ROM) , and a digital versatile disk ROM, etc.
  • the transmitter 107 may transmit the images, videos, and/or decoded data corresponding to the images or the videos buffered in the buffer 106 to the network 112.
  • the transmitter 107 may transmit the images, videos, and/or decoded data corresponding to the images or the videos in response to instructions sent from the image source 101, the buffer manager 105, the terminal 108, or the like, or a combination thereof.
  • the transmitter 107 may spontaneously transmit the images, videos, and/or decoded data corresponding to the images or the videos stored in the buffer 106.
  • the transmitter 107 may transmit the images, videos, and/or decoded data corresponding to the images or the videos to the terminal 108 through the network 112.
  • the terminal 108 may receive the transmitted images, videos, and/or decoded data corresponding to the images or the videos through the network 112. In some embodiments, the terminal 108 may display the images, videos, and/or decoded data corresponding to the images or the videos to a user or perform further operations such as payment, identity authentication, registration, etc.
  • the terminal 108 may be various in forms.
  • the terminal 108 may include a mobile device 109, a tablet computer 110, a laptop computer 111, or the like, or any combination thereof.
  • the mobile device 109 may include, a wearable device, a mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof.
  • the wearable device may include a bracelet, footgear, eyeglasses, a helmet, a watch, clothing, a backpack, a smart accessory, or the like, or any combination thereof.
  • the mobile device may include a mobile phone, a personal digital assistance (PDA) , a laptop, a tablet computer, a desktop, or the like, or any combination thereof.
  • the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof.
  • the virtual reality device and/or the augmented reality device may include a Google Glass™, an Oculus Rift™, a Hololens™, a Gear VR™, etc.
  • the terminal (s) 108 may be part of a processing engine.
  • the network 112 may include any suitable network that can facilitate data transmission.
  • the network 112 may be and/or include a public network (e.g., the Internet) , a private network (e.g., a local area network (LAN) , a wide area network (WAN) ) , a wired network (e.g., an Ethernet network) , a wireless network (e.g., an 802.11 network, a Wi-Fi network) , a cellular network (e.g., a Long Term Evolution (LTE) network) , a frame relay network, a virtual private network ( "VPN" ) , a satellite network, a telephone network, routers, hubs, switches, server computers, and/or any combination thereof.
  • the network 112 may include a cable network, a wireline network, a fiber-optic network, a telecommunications network, an intranet, a wireless local area network (WLAN) , a metropolitan area network (MAN) , a public telephone switched network (PSTN) , a Bluetooth™ network, a ZigBee™ network, a near field communication (NFC) network, or the like, or any combination thereof.
  • the network 112 may include one or more network access points.
  • the network 112 may include one or more network storage devices 113.
  • the network storage device 113 may be a device for buffering or caching data transmitted in the network 112.
  • the images, videos, and/or decoded data corresponding to the images or the videos transmitted by the transmitter 107 may be buffered or cached in one or more network storage devices 113 before being received by the terminal 108.
  • the network storage device 113 may be a server, a hub, a gateway, or the like, or a combination thereof.
  • one or more of the processing device 104, buffer manager 105, buffer 106, and transmitter 107 may be a stand-alone device, or a module integrated into the image source 101 or another stand-alone device.
  • one or more of the processing device 104, buffer manager 105, buffer 106 and transmitter 107 may be integrated into the scanner 102, the media server 103, and/or the terminal 108.
  • the processing device 104, buffer manager 105, buffer 106 and transmitter 107 may be included in a video processing engine which may communicate with the image source 101 through direct wired connection, the network 112, or another network.
  • the processing device 104 may be a stand-alone device (e.g., a computer or a server) , while the buffer manager 105, buffer 106 and transmitter 107 may be included in another stand-alone device.
  • FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure.
  • the computing device 200 may be the media server 103, the processing element of the scanner 102, and/or an electronic device specialized in image processing.
  • the processing device 104 and buffer manager 105 may also be implemented on the computing device 200.
  • the computing device 200 may include a processor 222, a storage 227, an input/output (I/O) 226, and a communication port 225.
  • the processor 222 may execute computer instructions (e.g., program code) and perform functions in accordance with techniques described herein.
  • the processor 222 may include interface circuits and processing circuits therein.
  • the interface circuits may be configured to receive electronic signals from a bus (not shown in FIG. 2) , wherein the electronic signals encode structured data and/or instructions for the processing circuits to process.
  • the processing circuits may conduct logical operations and calculations, and then determine a conclusion, a result, and/or an instruction encoded as electronic signals. The interface circuits may then send out the electronic signals from the processing circuits via the bus.
  • the computer instructions may include, for example, routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions described herein.
  • the processor 222 may include one or more hardware processors, such as a microcontroller, a microprocessor, a reduced instruction set computer (RISC) , an application-specific integrated circuit (ASIC) , an application-specific instruction-set processor (ASIP) , a central processing unit (CPU) , a graphics processing unit (GPU) , a physics processing unit (PPU) , a microcontroller unit, a digital signal processor (DSP) , a field programmable gate array (FPGA) , an advanced RISC machine (ARM) , a programmable logic device (PLD) , any circuit or processor capable of executing one or more functions, or the like, or any combinations thereof.
  • processors of the computing device 200 may also include multiple processors, thus operations and/or method steps that are performed by one processor as described in the present disclosure may also be jointly or separately performed by the multiple processors.
  • if the processor of the computing device 200 executes both step A and step B, it should be understood that step A and step B may also be performed by two or more different processors jointly or separately in the computing device 200 (e.g., a first processor executes step A and a second processor executes step B, or the first and second processors jointly execute steps A and B) .
  • the storage 227 may store data/information obtained from the image source 101, the processing device 104, the buffer manager 105, the buffer 106, the transmitter 107, the terminal 108, the network 112, the network storage device 113, and/or any other component of the image processing system 100.
  • the storage 227 may include a mass storage, removable storage, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof.
  • the mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc.
  • the removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc.
  • the volatile read-and-write memory may include a random-access memory (RAM) , which may include a dynamic RAM (DRAM) , a double data rate synchronous dynamic RAM (DDR SDRAM) , a static RAM (SRAM) , a thyristor RAM (T-RAM) , and a zero-capacitor RAM (Z-RAM) , etc.
  • the ROM may include a mask ROM (MROM) , a programmable ROM (PROM) , an erasable programmable ROM (EPROM) , an electrically erasable programmable ROM (EEPROM) , a compact disk ROM (CD-ROM) , and a digital versatile disk ROM, etc.
  • the storage 227 may store one or more programs and/or instructions to perform exemplary methods described in the present disclosure.
  • the storage 227 may store a program for the processing engine (e.g., the media server 103, the processing device 104) to decode a symbol image.
  • the I/O 226 may input and/or output signals, data, information, etc.
  • the I/O 226 may include an input device and an output device.
  • Examples of the input device may include a keyboard, a mouse, a touch screen, a microphone, or the like, or a combination thereof.
  • Examples of the output device may include a display device, a loudspeaker, a printer, a projector, or the like, or a combination thereof.
  • Examples of the display device may include a liquid crystal display (LCD) , a light-emitting diode (LED) -based display, a flat panel display, a curved screen, a television device, a cathode ray tube (CRT) , a touch screen, or the like, or a combination thereof.
  • the communication port 225 may be connected to a network (e.g., the network 112) to facilitate data communications.
  • the communication port 225 may establish connections between the image source 101, the processing device 104, the buffer manager 105, the buffer 106, the transmitter 107, the terminal 108, the network 112, the network storage device 113, and/or any other component of the image processing system 100.
  • the connection may be a wired connection, a wireless connection, any other communication connection that can enable data transmission and/or reception, and/or any combination of these connections.
  • the wired connection may include, for example, an electrical cable, an optical cable, a telephone wire, or the like, or any combination thereof.
  • the wireless connection may include, for example, a Bluetooth™ link, a Wi-Fi™ link, a WiMax™ link, a WLAN link, a ZigBee link, a mobile network link (e.g., 3G, 4G, 5G) , or the like, or a combination thereof.
  • the communication port 225 may be and/or include a standardized communication port, such as RS232, RS485, etc. In some embodiments, the communication port 225 may be a specially designed communication port.
  • FIG. 3 is a schematic diagram illustrating exemplary components of an exemplary terminal according to some embodiments of the present disclosure.
  • the terminal 300 may include a communication platform 320, a display 310, a graphic processing unit (GPU) 330, a central processing unit (CPU) 340, an I/O port 350, a memory 360, and a storage 390.
  • any other suitable component including but not limited to a system bus or a controller (not shown) , may also be included in the terminal 300.
  • a mobile operating system 370 (e.g., iOS™, Android™, Windows Phone™) and one or more applications 380 may be loaded into the memory 360 from the storage 390 in order to be executed by the processor 340.
  • the terminal 300 may be an embodiment of the terminal 108.
  • computer hardware platforms may be used as the hardware platform (s) for one or more of the elements described herein.
  • a computer with user interface elements may be used to implement a personal computer (PC) or any other type of work station or terminal device.
  • a computer may also act as a server if appropriately programmed.
  • FIG. 4 is a block diagram illustrating an exemplary processing device according to some embodiments of the present disclosure.
  • the processing device 104 may include an obtaining module 410, a row line determination module 420, a character region determination module 430, and a decoding module 440.
  • the obtaining module 410 may obtain data/information.
  • the obtaining module 410 may obtain a symbol image of a symbol including a plurality of symbol characters.
  • the symbol in the symbol image may be a barcode.
  • Exemplary barcodes may include code 16K, code 49, PDF 417, micro PDF 417, code one, maxicode, quick response (QR) code, data matrix, HanXin code, grid matrix, etc.
  • the obtained data/information may further include processed results, user instructions, algorithms, program codes, or the like, or a combination thereof.
  • the row line determination module 420 may determine a plurality of row lines in the symbol image along a length direction of the symbol.
  • the processing device 104 may identify edges of the symbol. The edges of the symbol may be identified based on the pixels in the symbol image.
  • the processing device 104 may adjust the size, position, and/or orientation of a positioning box dynamically until edges of the symbol coincide with or substantially coincide with edges of the positioning box.
  • each of the plurality of row lines may be determined by connecting pixels in a same row in the positioning box.
  • the character region determination module 430 may determine a character region corresponding to the symbol character for each of the plurality of symbol characters. In some embodiments, the character region corresponding to the symbol character for each of the plurality of symbol characters may be determined based on the plurality of column boundaries and the plurality of row boundaries in a symbol region including the plurality of symbol characters. The character region determination module 430 may obtain the plurality of row lines from the row line determination module 420, and determine the plurality of column boundaries and the plurality of row boundaries based on the plurality of row lines. Details regarding the determination of the plurality of column boundaries and the plurality of row boundaries can be found elsewhere in the present disclosure, for example, FIGs. 5, 12, and 15, and relevant descriptions thereof.
  • the decoding module 440 may decode each symbol character based on grey values associated with the character region corresponding to the symbol character.
  • the decoding module 440 may divide the character region into a plurality of blocks.
  • Each of the plurality of blocks may include one or more pixels (e.g., 1×4 pixels, i.e., one pixel in a row and 4 pixels in a column) .
  • Grey values of the one or more pixels of the block may be obtained.
  • a global grey value of the block may be determined; the global grey value of the block may be an overall representation of the grey values of the one or more pixels of the block.
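The block-level step above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: it assumes the "overall representation" is a simple mean, and all names are hypothetical.

```python
# Hypothetical sketch: take the global grey value of a block as the
# mean grey value of its pixels (one plausible overall representation).

def global_grey_value(block_pixels: list) -> float:
    """Average grey value over the pixels of one block."""
    return sum(block_pixels) / len(block_pixels)

assert global_grey_value([0, 0, 0, 0]) == 0.0            # a bar (black) block
assert global_grey_value([255, 255, 255, 255]) == 255.0  # a space (white) block
assert global_grey_value([250, 240, 10, 20]) == 130.0    # a mixed block
```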
  • the decoding module 440 may determine a contrast value of the symbol character corresponding to the character region based on the global grey values of the plurality of blocks. The decoding module 440 may determine a codeword corresponding to the symbol character based on the contrast value. In some embodiments, the decoding module 440 may obtain a plurality of preset codewords (e.g., 2787 codewords) . Each of the plurality of codewords may correspond to a predetermined codeword string. A reference contrast value of each of the plurality of preset codewords may be determined based on the corresponding predetermined codeword string.
  • the processing device 104 may determine a similarity value between each of the plurality of preset codewords and the symbol character based on the reference contrast value of each codeword and the contrast value of the symbol character.
  • the decoding module 440 may determine the codeword corresponding to the symbol character based on the similarity values.
  • the codeword corresponding to the symbol character may be or include decoded data (e.g., numbers, text, vectors, etc. ) corresponding to the symbol in the symbol image.
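The matching step described above can be sketched as follows. This is an illustrative simplification under stated assumptions: the contrast value of a character is modeled here as a per-module bar/space vector, similarity as a (negated) squared distance, and all function names are hypothetical.

```python
# Illustrative sketch: match a measured contrast vector of a symbol
# character against reference vectors derived from preset codeword
# strings, choosing the most similar one (smallest squared distance).

def codeword_to_contrast(codeword: str) -> list:
    """Expand a codeword string (alternating bar/space widths) into a
    17-element contrast vector: 1 for bar modules, 0 for space modules."""
    vector = []
    for i, width in enumerate(codeword):
        value = 1 if i % 2 == 0 else 0  # even positions are bars
        vector.extend([value] * int(width))
    return vector

def match_codeword(measured: list, preset_codewords: list) -> str:
    """Return the preset codeword whose reference contrast vector is
    closest to the measured vector (minimum squared error)."""
    def distance(codeword):
        ref = codeword_to_contrast(codeword)
        return sum((m - r) ** 2 for m, r in zip(measured, ref))
    return min(preset_codewords, key=distance)

presets = ["51111125", "41111144", "41111315"]
measured = codeword_to_contrast("41111144")   # a clean, undistorted measurement
assert match_codeword(measured, presets) == "41111144"
```

With a noisy measurement, the same minimum-distance rule still picks the nearest reference vector, which is the intent of the similarity-based selection described above.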
  • the modules in the processing device 104 may be connected to or communicate with each other via a wired connection or a wireless connection.
  • the wired connection may include a metal cable, an optical cable, a hybrid cable, or the like, or any combination thereof.
  • the wireless connection may include a Local Area Network (LAN) , a Wide Area Network (WAN) , a Bluetooth, a ZigBee, a Near Field Communication (NFC) , or the like, or any combination thereof.
  • Two or more of the modules may be combined into a single module, and any one of the modules may be divided into two or more units.
  • the processing device 104 may include a storage module (not shown) configured to store information and/or data (e.g., scanning data, images) associated with the above-mentioned modules.
  • FIG. 5 is a flow chart illustrating an exemplary process for decoding symbol characters of a symbol in a symbol image according to some embodiments of the present disclosure.
  • the process 500 may be implemented on the image processing system 100 as illustrated in FIG. 1.
  • the process 500 may be stored in a storage medium (e.g., the network storage device 113, or the storage 227 of the computing device 200) as a form of instructions, and invoked and/or executed by the processing device 104.
  • the operations in the process 500 presented below are intended to be illustrative. In some embodiments, the process 500 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 500, as illustrated in FIG. 5 and described below, is not intended to be limiting.
  • the processing device 104 may obtain a symbol image including a plurality of symbol characters in a symbol region.
  • the symbol image may be an image of a symbol.
  • the symbol image may be obtained from the image source 101 (e.g., the scanner 102 or the media server 103) of the image processing system 100.
  • the symbol in the symbol image may be a barcode.
  • Exemplary barcodes may include code 16K, code 49, PDF 417, micro PDF 417, code one, maxicode, quick response (QR) code, data matrix, HanXin code, grid matrix, etc. The following descriptions are provided, unless otherwise stated expressly, with reference to a PDF 417 barcode for illustration and not intended to be limiting.
  • the PDF 417 barcode may be a symbol with a stacked linear barcode format.
  • an exemplary PDF 417 barcode is illustrated in FIGs. 6 and 7.
  • the PDF 417 barcode may include a start symbol character (also referred to as start pattern) A, a symbol region B, and an end symbol character (also referred to as stop pattern) C.
  • the start symbol character A, the symbol region B, and the end symbol character C may be arranged sequentially along a length direction of the symbol (e.g., as illustrated in FIG. 6) .
  • the symbol region B may include four boundaries.
  • the four boundaries may include a start boundary (i.e., the boundary that separates the symbol region B from the start symbol character A) , an end boundary (i.e., the boundary that separates the symbol region B from the end symbol character C) , an upper boundary, and a lower boundary.
  • the PDF417 barcode may include a plurality of symbol characters in the symbol region B. Each of the plurality of symbol characters in the symbol region may be represented by a combination of four bars and four spaces. The four bars and four spaces may be arranged alternately. In some embodiments, the four bars and the four spaces may be constituted by 17 blocks. Each of the four bars and four spaces may correspond to a specific number or count of blocks along the length direction of the symbol. The arrangement of the specific number or count of blocks corresponding to each of the bars and spaces along the length direction of the symbol may be represented as a codeword string corresponding to the symbol character. Data encoded in the symbol character may be decoded based on the codeword string.
  • an exemplary symbol character is provided in FIG. 7.
  • the symbol character 700 may be in a dotted box.
  • the symbol character 700 may include four bars and four spaces.
  • the four bars and the four spaces may be represented by the black portions and the white portions in the dotted box, respectively.
  • the four bars and the four spaces may be constituted by 17 blocks, which are labeled with numbers 1 through 17 above the symbol character 700 as illustrated in FIG. 7.
  • the 17 blocks may have a same width along the length direction of the symbol (e.g., the horizontal direction in FIG. 7) .
  • Each block may have a shape of a stripe.
  • a length direction of each stripe may be perpendicular to the length direction of the symbol (e.g., a width direction of the symbol) .
  • a first bar corresponds to 5 blocks
  • a first space corresponds to 1 block
  • a second bar corresponds to 1 block
  • a second space corresponds to 1 block
  • a third bar corresponds to 1 block
  • a third space corresponds to 1 block
  • a fourth bar corresponds to 2 blocks
  • a fourth space corresponds to 5 blocks.
  • a width of each of the four bars and spaces may be labeled below the corresponding bar or space in FIG. 7.
  • the symbol character 700 represented by the bars and spaces may be represented by a codeword string 51111125.
  • Data encoded in the symbol character 700 (e.g., a number, text, etc. ) may be determined based on the codeword string.
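The mapping from the 17-block bar/space pattern to the codeword string described above is, in essence, a run-length encoding. The following sketch (function names are illustrative) reproduces the FIG. 7 example:

```python
# Illustrative sketch: recover the codeword string of a PDF 417 symbol
# character from its 17-module bar/space pattern by run-length encoding.
# 1 denotes a black (bar) block, 0 a white (space) block.

def modules_to_codeword(modules: list) -> str:
    """Run-length encode an alternating bar/space block sequence."""
    runs, count = [], 1
    for prev, cur in zip(modules, modules[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return "".join(str(r) for r in runs)

# The pattern of symbol character 700 in FIG. 7: a 5-block bar, then
# alternating 1-block spaces and bars, a 2-block bar, and a 5-block space.
pattern = [1]*5 + [0] + [1] + [0] + [1] + [0] + [1]*2 + [0]*5
assert len(pattern) == 17
assert modules_to_codeword(pattern) == "51111125"
```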
  • Each block in the symbol image may correspond to one or more pixels of the symbol image.
  • each block may be represented by 1×4 pixels (i.e., one pixel in a row and 4 pixels in a column as illustrated in FIG. 9) in the symbol image.
  • a direction of the row may be referred to as a row direction, and a direction of the column may be referred to as a column direction.
  • this is for illustration purposes, and not intended to be limiting.
  • a block may correspond to any number or count of pixels, such as 1×3 pixels (i.e., one pixel in a row and 3 pixels in a column) , 2×7 pixels (i.e., 2 pixels in a row and 7 pixels in a column) , etc.
  • the PDF 417 barcode may further include a first quiet zone and a second quiet zone (not shown in the figures) .
  • the first quiet zone and the second quiet zone may be located at both sides of the symbol along its length direction.
  • the first quiet zone and/or the second quiet zone may be spaces (i.e., white portions) having preset widths.
  • the processing device 104 may determine a plurality of row lines along the length direction of the symbol.
  • the processing device 104 may determine the length direction of the symbol using a positioning box.
  • the positioning box may be a rectangular box.
  • the processing device 104 may identify edges of the symbol. The edges of the symbol may be identified based on the pixels in the symbol image. For example, the processing device 104 may obtain grey values of all the pixels in the symbol image. Since colors of the bars of the start symbol character, the end symbol character, and the plurality of symbol characters in the symbol are black, grey values of pixels corresponding to the bars may be relatively small (e.g., 0) . The processing device 104 may identify the edges of the symbol based on the grey values of the pixels.
  • the processing device 104 may adjust a size, a position, and/or an orientation of the positioning box based on the identified edges of the symbol.
  • the orientation of the positioning box may be parallel to the length direction of the positioning box.
  • the processing device 104 may adjust the size, position, and/or orientation of the positioning box dynamically until edges of the symbol coincide with or substantially coincide with edges of the positioning box.
  • the orientation of the positioning box may be determined as the length direction of the symbol.
  • the processing device 104 may determine a plurality of row lines.
  • pixels in the positioning box may be in a first plurality of rows and a second plurality of columns.
  • a row direction of each of the first plurality of rows may be parallel to the length direction of the symbol.
  • a column direction of each of the second plurality of columns may be perpendicular to the length direction of the symbol.
  • each of the plurality of row lines may be determined by connecting pixels in a same row in the positioning box.
  • multiple scan lines may be determined.
  • the multiple scan lines may be parallel to the length direction of the symbol. Each scan line may traverse pixels in a same row in the positioning box. The multiple scan lines may be determined as the row lines.
  • At least one of the plurality of row lines may be a straight line. In some embodiments, at least one of the plurality of row lines may be a curved line. Each two neighboring row lines of the plurality of row lines may have a same distance or different distances.
  • exemplary row lines in a symbol image are illustrated in FIG. 8.
  • the symbol image 800 may include a symbol 810. Pixels in the symbol image 800 may be in eight rows. Accordingly, eight row lines may be determined by connecting pixels in a same row. As illustrated in FIG. 8, eight row lines L1-L8 may be determined. L1 through L8 may traverse pixels in different rows of the symbol 810.
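The row-line construction described above can be sketched as follows. This is a minimal illustration assuming an axis-aligned positioning box whose rows are already parallel to the symbol's length direction; all names are hypothetical.

```python
# Hypothetical sketch of generating row lines: each row line traverses
# the pixels of one image row inside an axis-aligned positioning box,
# parallel to the symbol's length direction.

def row_lines(box_top: int, box_left: int, box_height: int, box_width: int):
    """Yield each row line as the list of (row, col) pixel coordinates
    it traverses inside the positioning box."""
    for r in range(box_top, box_top + box_height):
        yield [(r, c) for c in range(box_left, box_left + box_width)]

lines = list(row_lines(box_top=0, box_left=0, box_height=8, box_width=5))
assert len(lines) == 8                    # eight row lines, as in FIG. 8
assert lines[0] == [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4)]
```

When the symbol is rotated, the same idea applies with scan lines oriented along the (adjusted) length direction of the positioning box rather than along image rows.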
  • the processing device 104 may determine, based on the plurality of row lines, a plurality of column boundaries among the plurality of symbol characters.
  • the plurality of symbol characters in the symbol region may be in a stacked arrangement.
  • the plurality of symbol characters may be arranged in multiple rows and columns in the symbol region.
  • a column boundary may refer to a boundary between two adjacent columns of symbol characters.
  • the processing device 104 may determine a plurality of column boundaries among the plurality of symbol characters. Each of the plurality of column boundaries may correspond to two adjacent columns of symbol characters of the plurality of symbol characters. In some embodiments, the plurality of column boundaries may be determined based on the plurality of row lines. In some embodiments, the plurality of column boundaries may be determined by identifying pixels on column boundaries corresponding to two adjacent columns of the plurality of symbol characters from the plurality of row lines.
  • the processing device 104 may determine a width of a reference symbol character associated with the plurality of symbol characters.
  • the reference symbol character may refer to a specific symbol character in the symbol.
  • the reference symbol character may be used to determine a width of a symbol character in the symbol region.
  • the reference symbol character may be a start symbol character (e.g., the start symbol character A as illustrated in FIG. 6) or an end symbol character (e.g., the end symbol character C as illustrated in FIG. 6) .
  • the plurality of symbol characters in the symbol region may have a same width.
  • a width of the start symbol character may be equal to a width of a symbol character in the symbol region.
  • a width of the end symbol character may be proportional to a width of a symbol character in the symbol region.
  • the processing device 104 may identify the start symbol character and the end symbol character from the symbol image.
  • the start symbol character and the end symbol character may be out of the symbol region.
  • the start symbol character may include a start edge and an end edge along the row direction of the symbol.
  • the end edge of the start symbol character may coincide with the start boundary of the symbol region.
  • the processing device 104 may determine the start boundary of the symbol region by determining the end edge of the start symbol character.
  • the end symbol character may also include a start edge and an end edge along the row direction of the symbol.
  • the start edge of the end symbol character may coincide with the end boundary of the symbol region.
  • the processing device 104 may determine the end boundary of the symbol region by determining the start edge of the end symbol character.
  • the plurality of column boundaries may be determined between the start boundary and the end boundary of the symbol region.
  • the processing device 104 may determine a reference width range based on the width of the reference symbol character.
  • the width of the start symbol character may be equal to the width of each symbol character in the symbol region.
  • the width of the end symbol character may be proportional to the width of each symbol character in the symbol region.
  • a width of a symbol character may vary in actual situations due to various factors, for example, distortion of the symbol image. Accordingly, the processing device 104 may determine the reference width range with the various factors taken into consideration.
  • the reference width range may be a range in which a width of a symbol character of the plurality of characters may vary.
  • the reference width range may be determined by increasing the width of the reference symbol character by an increment (e.g., 0.2 millimeters, 0.5 millimeters, 1 millimeter, 1 pixel, 2 pixels) and/or decreasing the width of the reference symbol character by a decrement (e.g., 0.2 millimeters, 0.5 millimeters, 1 millimeter, 1 pixel, 2 pixels) .
  • the increment and/or the decrement may be defined as an error corresponding to the symbol character.
  • the reference width range may be adjustable under different situations, such as different brightness conditions, different image resolutions, different image qualities (e.g., noise levels) , etc.
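The width-range computation described above can be sketched as follows. The increment and decrement values are illustrative only, and the function name is hypothetical:

```python
# Hedged sketch: derive a reference width range from the width of the
# reference symbol character (e.g., the start pattern), widened by a
# decrement/increment to tolerate distortion of the symbol image.

def reference_width_range(reference_width: float,
                          decrement: float = 2.0,
                          increment: float = 2.0) -> tuple:
    """Return the (min, max) width a symbol character may take."""
    return (reference_width - decrement, reference_width + increment)

low, high = reference_width_range(17.0, decrement=2.0, increment=2.0)
assert (low, high) == (15.0, 19.0)
assert low <= 17.5 <= high    # a slightly distorted character still fits
```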
  • the processing device 104 may determine the plurality of column boundaries among the plurality of symbol characters based on boundary characteristics between adjacent symbol characters and the reference width range.
  • a fourth space of a current symbol character may be consecutive to a first bar of an adjacent symbol character along the row direction of the symbol.
  • a column boundary between the two adjacent symbol characters in the row direction may have characteristics (also referred to as column boundary characteristics) that the color of a last block in the fourth space of the current symbol character and the color of a first block in the first bar of the adjacent symbol character may change from white to black along the row direction. Accordingly, grey values of pixels of the blocks may decrease, e.g., from 255 to 0, along the row direction. More descriptions regarding the determination of the plurality of column boundaries can be found elsewhere in the present disclosure, for example, FIG. 12 and the descriptions thereof.
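The two criteria above (a white-to-black transition at roughly one character width from the previous boundary) can be combined in a sketch like the following. Threshold, grey values, and names are illustrative assumptions, not the disclosed implementation:

```python
# Illustrative sketch: scan grey values along one row line and mark a
# column boundary at each white-to-black transition whose distance from
# the previous boundary falls inside the reference width range.

def column_boundaries(grey_row: list, start: int,
                      width_range: tuple, threshold: int = 128) -> list:
    """Return pixel indices on the row line where a space (white) pixel
    is followed by a bar (black) pixel at roughly one character width."""
    low, high = width_range
    boundaries, last = [], start
    for i in range(1, len(grey_row)):
        white_to_black = grey_row[i - 1] >= threshold > grey_row[i]
        if white_to_black and low <= i - last <= high:
            boundaries.append(i)
            last = i
    return boundaries

# Two 8-pixel-wide characters (255 = white, 0 = black) back to back:
# the 5-block space of the first meets the first bar of the second.
row = [0]*3 + [255]*5 + [0]*4 + [255]*4
assert column_boundaries(row, start=0, width_range=(7, 9)) == [8]
```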
  • the processing device 104 may determine, based on the plurality of row lines, a plurality of row boundaries among the plurality of symbol characters.
  • the processing device 104 may determine a plurality of intersections (e.g., pixels) of the plurality of row lines and the column boundary.
  • the processing device 104 may perform an upward traverse and identify an upper intersection among the plurality of intersections.
  • the upward traverse may refer to an operation for traversing intersections one by one upwards along a column boundary.
  • a pixel above the upper intersection (also referred to as upper pixel) along the column boundary and a pixel subsequent to the upper pixel along the row direction (also referred to as subsequent pixel of the upper pixel) in a same row may be determined.
  • the upper pixel and the subsequent pixel of the upper pixel may be located at different sides of the column boundary.
  • an upper boundary of the symbol region may have characteristics (also referred to as upper boundary characteristics) that an upper pixel and a pixel subsequent to the upper pixel along the row direction do not satisfy the column boundary characteristics, and a reference pixel (e.g., an intersection) below the upper pixel along the column boundary and a pixel subsequent to the reference pixel along the row direction satisfy the column boundary characteristics.
  • the reference pixel and the pixel subsequent to the reference point may be on the upper boundary of the symbol region.
  • the processing device 104 may determine whether the upper intersection and the pixel subsequent to the upper intersection, and the upper pixel and the subsequent pixel of the upper pixel, satisfy the column boundary characteristics (i.e., the color of the former pixel is white, the color of the latter pixel is black, and the colors of the two pixels change from white to black along the row direction) . If the upper pixel and the subsequent pixel of the upper pixel do not satisfy the column boundary characteristics, and the upper intersection and the pixel subsequent to the upper intersection along the row direction satisfy the column boundary characteristics (also referred to as the upper pixel and the upper intersection satisfying the upper boundary characteristics) , it may indicate that the upper intersection and the pixel subsequent to the upper intersection may be on the upper boundary of the symbol region.
  • the RANSAC algorithm may be used to determine the upper boundary of the symbol region based on the upper pixel and the subsequent pixel of the upper pixel corresponding to each of the plurality of column boundaries.
  • the processing device 104 may perform a downward traverse and identify a lower intersection among the plurality of intersections.
  • the downward traverse may refer to an operation for traversing intersections one by one downwards along the column boundary.
  • a pixel beneath the lower intersection (also referred to as lower pixel) along the column boundary and a pixel subsequent to the lower pixel along the row direction (also referred to as subsequent pixel of the lower pixel) in a same row may be determined.
  • the lower pixel and the subsequent pixel of the lower pixel may be located at different sides of the column boundary.
  • a lower boundary of the symbol region may have characteristics (also referred to as lower boundary characteristics) that a lower pixel and a pixel subsequent to the lower pixel along the row direction do not satisfy the column boundary characteristics, and a reference pixel (e.g., an intersection) above the lower pixel along the column boundary and a pixel subsequent to the reference pixel along the row direction satisfy the column boundary characteristics.
  • the reference pixel and the pixel subsequent to the reference point may be on the lower boundary of the symbol region.
  • the processing device 104 may determine whether the lower intersection and the subsequent pixel of the lower intersection, and the lower pixel and the subsequent pixel of the lower pixel, satisfy the column boundary characteristics. If the lower pixel and the subsequent pixel of the lower pixel do not satisfy the column boundary characteristics, and the lower intersection and a pixel subsequent to the lower intersection along the row direction satisfy the column boundary characteristics (also referred to as the lower pixel and the lower intersection satisfying the lower boundary characteristics) , it may indicate that the lower intersection and the subsequent pixel of the lower intersection may be on the lower boundary of the symbol region.
  • the RANSAC algorithm may be used to determine the lower boundary of the symbol region based on the lower pixel and the subsequent pixel of the lower pixel corresponding to each of the plurality of column boundaries.
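The upward traverse described above can be sketched as follows (the downward traverse is symmetric). Grey values, the threshold, and all names are illustrative assumptions:

```python
# Sketch of the upward traverse along one column boundary: climb the
# rows as long as the pixel pair just above still shows the
# white-to-black column-boundary transition; the last row at which it
# holds lies on the upper boundary of the symbol region.

def satisfies_column_boundary(grey, row, col, threshold=128):
    """White pixel immediately followed (row direction) by a black one."""
    return grey[row][col] >= threshold > grey[row][col + 1]

def upper_boundary_row(grey, col, start_row):
    """Traverse upward from start_row; return the topmost row at which
    the column-boundary characteristics still hold."""
    row = start_row
    while row > 0 and satisfies_column_boundary(grey, row - 1, col):
        row -= 1
    return row

# A 4-row toy image: the boundary transition holds on rows 1-3 only,
# so row 1 is the upper boundary of this column boundary.
W, B = 255, 0
grey = [[B, B], [W, B], [W, B], [W, B]]
assert upper_boundary_row(grey, col=0, start_row=3) == 1
```

Repeating this for every column boundary yields one candidate point per boundary, to which a line-fitting step such as RANSAC can then be applied.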
  • a row boundary may refer to a boundary between two adjacent rows of symbol characters.
  • the processing device 104 may determine a plurality of row boundaries among the plurality of symbol characters. Each of the plurality of row boundaries may correspond to two adjacent rows of symbol characters of the plurality of symbol characters. In some embodiments, the plurality of row boundaries may be determined based on the plurality of row lines. In some embodiments, the plurality of row boundaries may be determined by identifying the plurality of row boundaries from the plurality of row lines based on boundary characteristics between adjacent symbol characters.
  • widths of bars and spaces between two adjacent symbol characters in the column direction may be different.
  • a row boundary between the two adjacent symbol characters in the column direction may have characteristics (also referred to as row boundary characteristics) that the colors of all the pixels on the row boundary may be the same as the colors of corresponding pixels on a first adjacent row line of the row boundary, and the color of at least one pixel on the row boundary may be different from the color of corresponding pixel on a second adjacent row line of the row boundary.
  • the first adjacent row line may be above the row boundary, and the second adjacent row line may be below the row boundary.
  • the first adjacent row line may be below the row boundary, and the second adjacent row line may be above the row boundary.
  • grey values of all the pixels on the row boundary may be the same as or close to grey values of corresponding pixels on the first adjacent row line of the row boundary, and the grey value of at least one pixel on the row boundary may be different from the grey value of corresponding pixel on the second adjacent row line of the row boundary.
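The row boundary test described above can be sketched as a direct comparison of grey values. The tolerance `tol` and the representation of each row line as a list of grey values are illustrative assumptions, not details from the source:

```python
def satisfies_row_boundary(candidate, first_adjacent, second_adjacent, tol=10):
    """Check the row boundary characteristics described above.

    candidate, first_adjacent, second_adjacent: equal-length lists of grey
    values on the candidate row line and its two adjacent row lines.
    """
    # All pixels on the candidate must be the same as (or close to)
    # the corresponding pixels on the first adjacent row line ...
    same_as_first = all(abs(c - f) <= tol
                        for c, f in zip(candidate, first_adjacent))
    # ... and at least one pixel must differ from the corresponding pixel
    # on the second adjacent row line.
    differs_from_second = any(abs(c - s) > tol
                              for c, s in zip(candidate, second_adjacent))
    return same_as_first and differs_from_second
```

A row line passing both checks would be kept as a row boundary candidate between two adjacent rows of symbol characters.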
  • FIG. 10 is a schematic diagram illustrating an exemplary row boundary between two adjacent symbol characters according to some embodiments of the present disclosure.
  • a symbol character P may be represented as 41111144.
  • row lines L1 -L4 each connecting pixels in a same row, may be determined.
  • a symbol character Q may be represented as 41111315.
  • row lines L5 -L8 each connecting pixels in a same row, may be determined.
  • grey values of all the pixels on the row line L3 may be the same as or close to grey values of corresponding pixels on the row line L4.
  • the grey value of at least one pixel (e.g., pixels in the rectangular boxes as shown in FIG. 10) on the row line L5 may be different from the grey value of the corresponding pixel on the row line L4.
  • the pixels on the row line L4 may satisfy the row boundary characteristics.
  • the row line L4 may be determined as the row boundary between the symbol character P and the symbol character Q.
  • the number or count of pixels in the symbol image may be relatively large (i.e., the symbol image may have a high resolution (e.g., 7680 × 4320 pixels) ) .
  • the processing device 104 may sample a particular number or count of pixels on each row line from all the pixels on each row line, and determine the plurality of row boundaries among the plurality of symbol characters based on the sampled pixels on each row line.
  • the processing device 104 may sample a particular number or count of row lines from the plurality of row lines, and determine the plurality of row boundaries among the plurality of symbol characters based on the sampled row lines.
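The sampling described above (taking a particular count of pixels from each row line, or a particular count of row lines) can be sketched as even subsampling; the even spacing is an assumption, since the source does not specify how the samples are chosen:

```python
def sample_evenly(items, count):
    """Sample a fixed count of items, evenly spaced, from a sequence
    (e.g., pixels on a row line, or the row lines themselves)."""
    if count >= len(items):
        return list(items)
    step = len(items) / count  # fractional stride between samples
    return [items[int(i * step)] for i in range(count)]
```

The same helper could serve both cases: subsampling pixels within a row line, and subsampling whole row lines, before applying the row boundary test.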
  • the processing device 104 may determine a character region corresponding to the symbol character for each of the plurality of symbol characters based on the plurality of column boundaries and the plurality of row boundaries.
  • the plurality of column boundaries and the plurality of row boundaries may divide the symbol region into a plurality of character regions.
  • Each of the plurality of character regions may correspond to one of the plurality of symbol characters.
  • the processing device 104 may decode each symbol character based on grey values associated with the character region corresponding to the symbol character.
  • the processing device 104 may divide the character region into a plurality of blocks.
  • the character region may be divided into the plurality of blocks (e.g., 17 blocks) along the row direction.
  • the processing device 104 may determine a global gray value of each of the plurality of blocks.
  • Each of the plurality of blocks may include one or more pixels (e.g., 1 ⁇ 4 pixels (i.e., one pixel in a row and 4 pixels in a column) ) .
  • Grey values of the one or more pixels of the block may be obtained.
  • the global gray value of the block may be an overall representation of the grey values of the one or more pixels of the block.
  • the global gray value of the block may be a mean value of gray values of the one or more pixels of the block.
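A sketch of the block division and global grey value computation described above, assuming the character region is given as a rows × columns grid of grey values and using the arithmetic mean option:

```python
def block_global_greys(region, num_blocks=17):
    """Divide a character region (list of rows, each a list of grey values)
    into num_blocks blocks along the row direction, and return the global
    grey value (arithmetic mean of its pixels) of each block."""
    rows, cols = len(region), len(region[0])
    # Column boundaries of the blocks, spread evenly across the region.
    bounds = [round(i * cols / num_blocks) for i in range(num_blocks + 1)]
    greys = []
    for i in range(num_blocks):
        pixels = [region[r][c]
                  for r in range(rows)
                  for c in range(bounds[i], bounds[i + 1])]
        greys.append(sum(pixels) / len(pixels))
    return greys
```

With the default of 17 blocks this matches the 17-module width of a PDF417 symbol character, so each global grey value corresponds to one module.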
  • the processing device 104 may determine a contrast value of the symbol character corresponding to the character region based on the global grey values of the plurality of blocks. In some embodiments, the processing device 104 may identify blocks of a first type (e.g., blocks belonging to spaces of the symbol character) and blocks of a second type (e.g., blocks belonging to bars of the symbol character) from the plurality of blocks in the character region. The contrast value of the symbol character may be determined based at least on the global grey values of the blocks of the first type and the global grey values of the blocks of the second type.
  • a first type e.g., blocks belonging to spaces of the symbol character
  • blocks of a second type e.g., blocks belonging to bars of the symbol character
  • the processing device 104 may determine a codeword corresponding to the symbol character based on the contrast value.
  • the processing device 104 may obtain a plurality of preset codewords (e.g., 2787 codewords) .
  • Each of the plurality of codewords may correspond to a predetermined codeword string.
  • a reference contrast value of each of the plurality of preset codewords may be determined based on the corresponding predetermined codeword string.
  • the processing device 104 may determine a similarity value between each of the plurality of preset codewords and the symbol character based on the reference contrast value of each preset codeword and the contrast value of the symbol character.
  • the processing device 104 may determine the codeword corresponding to the symbol character based on the similarity values.
  • the codeword corresponding to the symbol character may be or include decoded data (e.g., numbers, text, vectors, etc. ) corresponding to the symbol in the symbol image. Details regarding the decoding of a symbol character can be found elsewhere in the present disclosure, for example, FIG. 14 and the descriptions thereof.
  • the decoded data may be transmitted by the transmitter 107 to the terminal 108 through the network 112.
  • the terminal 108 may receive the decoded data corresponding to the symbol in the symbol image through the network 112, and display the decoded data to a user or perform further operations such as payment, identity authentication, registration, etc.
  • the processing device 104 may determine the plurality of column boundaries among the plurality of symbol characters and the plurality of row boundaries among the plurality of symbol characters based on the row lines, the column boundary characteristics, and the row boundary characteristics.
  • the symbol region may be segmented into a plurality of character regions based on the plurality of column boundaries among the plurality of symbol characters and the plurality of row boundaries among the plurality of symbol characters. Then the processing device 104 may decode the symbol character corresponding to each of the plurality of character regions based on grey values associated with the character region corresponding to the symbol character.
  • the position of each of the plurality of symbol characters may be determined more accurately, the codeword corresponding to the symbol character may be determined more efficiently and accurately based on the plurality of preset codewords, and errors in the decoding process caused by factors such as image distortion, uneven light, etc., may be reduced or eliminated, thus improving the effectiveness and accuracy of the decoding process.
  • the upper boundary and the lower boundary of the symbol region may be determined in various ways. As illustrated in FIG. 11, a symbol image 1100 including a symbol 1110 is provided. A plurality of scan lines L1-Ln may be determined in the symbol image 1100. Each two neighboring scan lines of the plurality of scan lines L1-Ln may have a same distance or different distances. The plurality of scan lines L1-Ln may be parallel to the length direction of the symbol 1110. Each of the plurality of scan lines L1-Ln may traverse pixels in a same row in a symbol region ABCD. The processing device 104 may determine a plurality of column boundaries S1-Sn among a plurality of symbol characters in the symbol region ABCD. In some embodiments, the plurality of column boundaries S1-Sn may be determined based on the plurality of scan lines L1-Ln.
  • the processing device 104 may determine a plurality of intersections of the plurality of scan lines and the column boundary.
  • the processing device 104 may identify an intersection at the uppermost position among the plurality of intersections (also referred to as the upper intersection) .
  • the processing device 104 may further perform an upward traverse from the intersection at the uppermost position until an upper point is identified.
  • the upward traverse may be an operation for traversing positions above the intersection at the uppermost position one by one upwards along the column boundary.
  • an upper boundary Lu of the symbol region ABCD may have characteristics (also referred to as upper boundary characteristics) that the upper point and a point subsequent to the upper point along the row direction do not satisfy the column boundary characteristics, and a reference point below the upper point along the column boundary and a point subsequent to the reference point along the row direction satisfy the column boundary characteristics.
  • the reference point and the point subsequent to the reference point may be on the upper boundary Lu of the symbol region ABCD.
  • the upper point, the reference point, the point subsequent to the upper point, and/or the point subsequent to the reference point may be different from the intersection at the uppermost position.
  • the upper point, the reference point, the point subsequent to the upper point, and/or the point subsequent to the reference point may be pixels.
  • the upper point and a point subsequent to the upper point along the row direction may be determined.
  • the upper point and the subsequent point of the upper point may be located at different sides of the column boundary.
  • the processing device 104 may determine whether the upper point and the subsequent point of the upper point, and a reference point below the upper point and the subsequent point of the reference point satisfy the column boundary characteristics. If the upper point and the subsequent point of the upper point, and a reference point below the upper point and the subsequent point of the reference point satisfy the column boundary characteristics (i.e., the upper point and the reference point satisfy the upper boundary characteristics) , it may indicate that the reference point and the subsequent point of the reference point may be on the upper boundary Lu of the symbol region ABCD.
  • the upper boundary Lu of the symbol region ABCD may be determined based on a plurality of reference points corresponding to the plurality of column boundaries S1-Sn.
  • the RANSAC algorithm may be used to determine the upper boundary Lu of the symbol region ABCD based on the plurality of reference points.
  • the lower boundary Ld of the symbol region ABCD may be determined similarly.
  • FIG. 12 is a flow chart illustrating an exemplary process for determining a plurality of column boundaries among a plurality of symbol characters in a symbol region of a symbol according to some embodiments of the present disclosure.
  • the process 1200 may be implemented on the image processing system 100 as illustrated in FIG. 1.
  • the process 1200 may be stored in a storage medium (e.g., the network storage device 113, or the storage 227 of the computing device 220) as a form of instructions, and invoked and/or executed by the processing device 104.
  • the operations in the process 1200 presented below are intended to be illustrative. In some embodiments, the process 1200 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 1200 as illustrated in FIG. 12 and described below is not intended to be limiting.
  • the processing device 104 may determine a width of a reference symbol character associated with the plurality of symbol characters.
  • the reference symbol character may be a start symbol character or an end symbol character.
  • the plurality of symbol characters in the symbol region may have a same width.
  • a width of the start symbol character may be equal to a width of a symbol character in the symbol region.
  • a width of the end symbol character may be proportional to a width of a symbol character in the symbol region.
  • the processing device 104 may obtain a codeword string corresponding to the reference symbol character.
  • the codeword string corresponding to the reference symbol character may also be referred to as predetermined codeword string.
  • the predetermined codeword string may be a standard codeword string. In some other embodiments, the predetermined codeword string corresponding to the reference symbol character may be determined by a user, according to default settings of the image processing system 100, etc.
  • the processing device 104 may determine the arrangement and/or widths of bars and spaces of the reference symbol character based on the predetermined codeword string. In some embodiments, the processing device 104 may further determine grey values of at least a part of pixels corresponding to the bars and spaces of the reference symbol character.
  • the at least a part of pixels may be, for example, pixels in at least one row corresponding to the bars and spaces of the reference symbol character.
  • the grey values of the at least a part of pixels may also be referred to as predetermined grey values.
  • the processing device 104 may compare the predetermined grey values corresponding to the reference symbol character with grey values of pixels on at least one of the plurality of row lines. Based on the comparison, the processing device 104 may identify at least one line segment on the at least one row line. Grey values of pixels of the at least one line segment may match to the predetermined grey values. The processing device 104 may designate a length of the at least one line segment as the width of the reference symbol character.
  • a predetermined codeword string corresponding to the start symbol character may be 81111113.
  • each of the 17 blocks constituting a symbol character is represented by one pixel in the symbol image.
  • the pixels may be represented by the black and white dots as illustrated in FIG. 13.
  • a start symbol character 1310 may include 8 pixels as a first bar, 1 pixel as a first space, 1 pixel as a second bar, 1 pixel as a second space, 1 pixel as a third bar, 1 pixel as a third space, 1 pixel as a fourth bar, and 3 pixels as a fourth space.
  • Grey values of pixels corresponding to the start symbol character 1310 (i.e., predetermined grey values) may be determined.
  • the processing device 104 may identify at least one line segment 1330 on a row line 1320 based on the predetermined grey values. Grey values of pixels of the at least one line segment 1330 may match the predetermined grey values.
  • the processing device 104 may designate a length of the line segment 1330 as the width of the start symbol character.
  • a predetermined codeword string corresponding to the end symbol character (also referred to as end codeword string) may be 711311121. The width of the end symbol character may be determined similarly.
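One possible way to locate the reference symbol character on a row line and measure its width, as described above, is to run-length encode a binarized row line and search for runs in the 8-1-1-1-1-1-1-3 ratio of the start codeword string 81111113. The binarization threshold and the per-run ratio tolerance below are illustrative assumptions, not values from the source:

```python
def find_start_character_width(row, pattern=(8, 1, 1, 1, 1, 1, 1, 3),
                               threshold=128, tol=0.3):
    """Scan one row line (a list of grey values) for a bar/space run
    pattern matching the start codeword string 81111113 and return
    (start_index, width_in_pixels) of the first match, or None."""
    # Run-length encode the binarised row as (is_bar, run_length) pairs.
    runs = []
    prev, length = row[0] < threshold, 0
    for grey in row:
        is_bar = grey < threshold
        if is_bar == prev:
            length += 1
        else:
            runs.append((prev, length))
            prev, length = is_bar, 1
    runs.append((prev, length))

    total_modules = sum(pattern)  # 17 modules per symbol character
    for i in range(len(runs) - len(pattern) + 1):
        if not runs[i][0]:  # the start character begins with a bar
            continue
        widths = [runs[i + j][1] for j in range(len(pattern))]
        module = sum(widths) / total_modules  # estimated module width
        if all(abs(w - p * module) <= tol * p * module + 1
               for w, p in zip(widths, pattern)):
            start = sum(r[1] for r in runs[:i])
            return start, sum(widths)
    return None
```

The end symbol character could be located the same way by passing the run pattern of its predetermined codeword string instead.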
  • the processing device 104 may identify the reference symbol character from the symbol image when the at least one line segment is determined.
  • Each of the line segments may include a start point and an end point. Start points of two or more line segments may form a start edge of the reference symbol character. End points of the two or more line segments may form an end edge of the reference symbol character.
  • the processing device 104 may identify two or more line segments corresponding to the start symbol character on the plurality of row lines. End points of the two or more line segments corresponding to the start symbol character (also referred to as end points of the start symbol character) may form the end edge of the start symbol character.
  • a random sample consensus (RANSAC) algorithm may be used to determine the end edge of the start symbol character based on the end points of the two or more line segments corresponding to the start symbol character.
  • the end edge of the start symbol character may be determined by fitting a line according to the RANSAC algorithm based on the end points of the two or more line segments corresponding to the start symbol character.
  • the start symbol character may be out of the symbol region. The end edge of the start symbol character may coincide with the start boundary of the symbol region. In this case, the start boundary of the symbol region may be determined.
  • the processing device 104 may identify two or more line segments corresponding to the end symbol character on the plurality of row lines. Start points of the two or more line segments corresponding to the end symbol character (also referred to as start points of the end symbol character) may form the start edge of the end symbol character.
  • the RANSAC algorithm may be used to determine the start edge of the end symbol character based on the start points of the two or more line segments corresponding to the end symbol character.
  • the end symbol character may be out of the symbol region.
  • the start edge of the end symbol character may coincide with the end boundary of the symbol region. In this case, the end boundary of the symbol region may be determined.
  • the plurality of column boundaries may be determined between the start boundary and the end boundary of the symbol region.
  • the processing device 104 may determine a reference width range based on the width of the reference symbol character.
  • the reference width range may be determined by increasing the width of the reference symbol character by an increment (e.g., 0.2 millimeters, 0.5 millimeters, 1 millimeter, 1 pixel, 2 pixels, etc. ) and/or decreasing the width of the reference symbol character by a decrement (e.g., 0.2 millimeters, 0.5 millimeters, 1 millimeter, 1 pixel, 2 pixels, etc. ) .
  • the increment and/or the decrement may be defined as an error corresponding to the symbol character.
  • each of the plurality of symbol characters may correspond to a same error.
  • at least one of the plurality of symbol characters may correspond to a different error.
  • the processing device 104 may determine the reference width range for each of the plurality of symbol characters in the symbol region based on the width of the start symbol character and an error corresponding to the symbol character. If the reference symbol character is an end symbol character, the processing device 104 may determine the reference width range for each of the plurality of symbol characters in the symbol region based on the width of the end symbol character, a width ratio, and an error corresponding to the symbol character.
  • the width ratio may be a ratio of a width of the symbol character to the width of the end symbol character.
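The reference width range computation described above reduces to a simple interval around an expected width. The function below is a sketch; width_ratio = 1.0 covers the case where the reference is the start symbol character, and a ratio of the symbol character's width to the end symbol character's width covers the other case:

```python
def reference_width_range(ref_width, error, width_ratio=1.0):
    """Reference width range for a symbol character: the reference
    character's width scaled by a width ratio, plus/minus an error
    (e.g., an error of 2 pixels)."""
    expected = ref_width * width_ratio
    return expected - error, expected + error
```

A candidate column boundary would then be accepted only if the resulting character width falls within this range.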
  • FIG. 9 is a partial enlarged view of a PDF 417 barcode according to some embodiments of the present disclosure.
  • the partial enlarged view includes two symbol characters M and N.
  • the reference width range may be determined based on the width of the start symbol character and an error.
  • the error may be represented by an increment of two pixels and a decrement of two pixels shown in the parentheses 910 on each of the plurality of row lines.
  • the processing device 104 may determine the plurality of column boundaries among the plurality of symbol characters based on boundary characteristics between adjacent symbol characters and the reference width range.
  • a column boundary S2 between the symbol character M and the symbol character N is determined based on the column boundary characteristics between adjacent symbol characters and the reference width range.
  • the color of a last block 920 (represented by 4 pixels in a column) of the symbol character M and the color of a first block 930 (represented by 4 pixels in a column) of the symbol character N may change from white to black along the row direction. Accordingly, grey values of pixels of the blocks 920 and 930 may decrease (e.g., from 255 to 0) along the row direction.
  • the processing device 104 may identify a plurality of pixels or points (e.g., pixels, midpoints each of which being between two consecutive pixels, etc. ) on at least two of the plurality of row lines in the symbol region that satisfy the column boundary characteristics between adjacent symbol characters and the reference width range.
  • the RANSAC algorithm may be used to determine the plurality of column boundaries based on the plurality of points. It should be noted that the RANSAC algorithm is provided for illustration purposes and not intended to be limiting. Any algorithm or model for curve fitting may be used to determine the column boundaries among the plurality of symbol characters.
  • the plurality of row lines L1-L8 along the symbol direction may be curves due to distortion of the symbol image 800.
  • a plurality of pixels on the plurality of row lines L1-L8 that satisfy the column boundary characteristics between adjacent symbol characters and the reference width range may be identified.
  • Column boundaries S1-S5 may be formed based on the plurality of identified pixels and the RANSAC algorithm.
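The RANSAC fitting step used for the boundaries can be sketched as a minimal sampling loop over candidate boundary points; the iteration count, inlier tolerance, and seed are illustrative defaults, not values from the source:

```python
import random

def ransac_line(points, iterations=100, inlier_tol=2.0, seed=0):
    """Fit a line through 2-D candidate boundary points with a minimal
    RANSAC loop: repeatedly sample two points, count how many points lie
    within inlier_tol of the line they define, and keep the best model.

    Returns (best_point_pair, inlier_count)."""
    rng = random.Random(seed)
    best_pair, best_inliers = None, -1
    for _ in range(iterations):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        # Line through the two sampled points in ax + by + c = 0 form.
        a, b = y2 - y1, x1 - x2
        c = -(a * x1 + b * y1)
        norm = (a * a + b * b) ** 0.5
        if norm == 0:  # degenerate sample (duplicate points)
            continue
        inliers = sum(1 for (x, y) in points
                      if abs(a * x + b * y + c) / norm <= inlier_tol)
        if inliers > best_inliers:
            best_pair, best_inliers = ((x1, y1), (x2, y2)), inliers
    return best_pair, best_inliers
```

Because the loop keeps the model with the most inliers, stray points that satisfy the column boundary characteristics by accident (e.g., due to noise or distortion) are ignored, which is why the same fitting is reused for column boundaries and for the upper and lower boundaries of the symbol region.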
  • FIG. 14 is a flow chart illustrating an exemplary process for decoding a symbol character based on grey values associated with a character region corresponding to the symbol character according to some embodiments of the present disclosure.
  • the process 1400 may be implemented on the image processing system 100 as illustrated in FIG. 1.
  • the process 1400 may be stored in a storage medium (e.g., the network storage device 113, or the storage 227 of the computing device 200) as a form of instructions, and invoked and/or executed by the processing device 104.
  • the operations in the process 1400 presented below are intended to be illustrative. In some embodiments, the process 1400 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 1400 as illustrated in FIG. 14 and described below is not intended to be limiting. In some embodiments, the operation 560 of the process 500 in FIG. 5 may be performed according to the process 1400.
  • the processing device 104 may divide, along the row direction, the character region corresponding to the symbol character into a plurality of blocks.
  • the processing device 104 may divide, along the row direction, the character region corresponding to the symbol character into a preset number or count of blocks equally.
  • the preset number or count may be 17.
  • the processing device 104 may determine a global gray value of each of the plurality of blocks.
  • Each of the plurality of blocks may include one or more pixels.
  • a block may be represented by 1 × 4 pixels (i.e., one pixel in a row and 4 pixels in a column) , 1 × 3 pixels (i.e., one pixel in a row and 3 pixels in a column) , 2 × 7 pixels (i.e., 2 pixels in a row and 7 pixels in a column) , etc.
  • Grey values of the one or more pixels of the block may be obtained.
  • the global gray value of the block may be an overall representation of the grey values of the one or more pixels of the block. In some embodiments, the global gray value of the block may be a mean value of gray values of the one or more pixels of the block.
  • the mean value may be, for example, an arithmetic mean value, a harmonic mean value, a quadratic mean value, etc.
  • a grey value of a particular pixel of the block may be determined as the global gray value of the block.
  • the particular pixel may be specified by a user, according to default settings of the data imaging system 100, etc.
  • the particular pixel may be selected from the one or more pixels randomly.
  • the processing device 104 may determine a contrast value of the symbol character based on the global grey values of the plurality of blocks.
  • the processing device 104 may identify blocks of a first type and blocks of a second type from the plurality of blocks in the character region.
  • the blocks of the first type may refer to blocks belonging to spaces of the symbol character
  • the blocks of the second type may refer to blocks belonging to bars of the symbol character.
  • the blocks of the first type may be in the white color
  • the blocks of the second type may be in the black color.
  • the blocks of the first type may refer to blocks belonging to bars of the symbol character
  • the blocks of the second type may refer to blocks belonging to spaces of the symbol character.
  • the processing device 104 may identify blocks of a first type and blocks of a second type from the plurality of blocks in the character region based on the global grey values of the plurality of blocks.
  • the blocks of the first type includes blocks belonging to spaces of the symbol character
  • the blocks of the second type includes blocks belonging to bars of the symbol character
  • the blocks of the first type may have greater global grey values (e.g., 255, 240, 230, 220, etc. )
  • the blocks of the second type may have smaller global grey values (e.g., 0, 10, 20, 30, etc. ) .
  • the contrast value of the symbol character may be determined based at least on the global grey values of the blocks of the first type and the global grey values of the blocks of the second type. For example, the processing device 104 may determine a first ratio of a sum of global grey values of blocks of the first type in the character region to a count of the blocks of the first type. Similarly, the processing device 104 may determine a second ratio of a sum of global grey values of blocks of the second type in the character region to a count of the blocks of the second type. The contrast value of the symbol character may be determined based on a difference value between the first ratio and the second ratio. Merely by way of example, the contrast value of the symbol character may be determined according to Formula (1) : contrast = spaceGraySum / spaceNum - barGraySum / barNum (1)
  • contrast denotes the contrast value of the symbol character
  • spaceGraySum denotes a sum of global grey values of blocks in spaces of the symbol character
  • spaceNum denotes the number or count of the blocks in spaces
  • barGraySum denotes a sum of global grey values of blocks in bars of the symbol character
  • barNum denotes the number or count of the blocks in bars.
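Formula (1), the mean global grey value of the space blocks minus the mean global grey value of the bar blocks, can be sketched as:

```python
def character_contrast(space_greys, bar_greys):
    """Contrast value per Formula (1): contrast =
    spaceGraySum / spaceNum - barGraySum / barNum, where the inputs are
    the global grey values of the space blocks and the bar blocks."""
    return (sum(space_greys) / len(space_greys)
            - sum(bar_greys) / len(bar_greys))
```

For a clean character the contrast approaches 255 (white spaces, black bars); distortion or uneven lighting lowers it, which is why the comparison against reference contrast values of preset codewords is done on contrast rather than raw grey values.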
  • the processing device 104 may determine a codeword corresponding to the symbol character based on the contrast value.
  • the processing device 104 may obtain a plurality of preset codewords (e.g., 2787 codewords) .
  • the plurality of preset codewords may be obtained from, for example, a storage device (e.g., the network storage device 113, or the storage 227 of the computing device 228, etc. ) of the image processing system 100 or an external device (e.g., a cloud database) .
  • the plurality of preset codewords may be numbers (e.g., natural numbers) .
  • Each of the plurality of codewords may correspond to a predetermined codeword string.
  • Correspondence relationships between the plurality of preset codewords and the plurality of predetermined codeword strings may be provided in a matrix, a vector, a data array, a table, etc.
  • a plurality of codewords and corresponding codeword strings are provided in Table 1.
  • a reference contrast value of each of the plurality of preset codewords may be determined.
  • the determination of the reference contrast value of each of the plurality of preset codewords may be according to Formula (1) , which is similar to or the same as the determination of the contrast value of the symbol character as described in operation 1430, and will not be repeated.
  • a similarity value between each of the plurality of preset codewords and the symbol character may be determined based on the reference contrast value of each preset codeword and the contrast value of the symbol character.
  • the processing device 104 may determine the codeword corresponding to the symbol character based on the similarity values.
  • the similarity values may be ranked (e.g., in an ascending order or a descending order) .
  • the processing device 104 may identify a greatest similarity value from the determined similarity values.
  • a preset codeword corresponding to the identified similarity value may be determined as the codeword corresponding to the symbol character.
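The codeword selection described above (ranking similarity values and keeping the greatest) can be sketched as follows. The similarity measure, negative absolute difference between contrast values, is an illustrative assumption, since the source does not specify one:

```python
def best_codeword(symbol_contrast, preset_codewords):
    """Pick the preset codeword whose reference contrast value is most
    similar to the symbol character's contrast value.

    preset_codewords: dict mapping codeword -> reference contrast value.
    """
    def similarity(codeword):
        # Higher (less negative) similarity means a closer contrast match.
        return -abs(preset_codewords[codeword] - symbol_contrast)
    return max(preset_codewords, key=similarity)
```

In practice the dictionary would hold all preset codewords (e.g., 2787 of them), each keyed to the reference contrast value computed from its predetermined codeword string.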
  • FIG. 15 is a flow chart illustrating an exemplary process for decoding symbol characters of a symbol in a symbol image according to some embodiments of the present disclosure.
  • the process 1500 may be implemented on the image processing system 100 as illustrated in FIG. 1.
  • the process 1500 may be stored in a storage medium (e.g., the network storage device 113, or the storage 227 of the computing device 200) as a form of instructions, and invoked and/or executed by the processing device 104.
  • the operations in the process 1500 presented below are intended to be illustrative. In some embodiments, the process 1500 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 1500 as illustrated in FIG. 15 and described below is not intended to be limiting.
  • FIG. 16 is a schematic diagram of an exemplary PDF 417 barcode.
  • the processing device 104 may obtain a symbol image of a symbol including a plurality of symbol characters in a symbol region.
  • the symbol image 1600 may be obtained from the image source 101 of the image processing system 100.
  • the PDF 417 barcode in the symbol image 1600 may include a start symbol character in a region EADF, a symbol region ABCD, and an end symbol character in a region BGHC.
  • the PDF 417 barcode may include a plurality of symbol characters in the symbol region ABCD.
  • the processing device 104 may determine a plurality of row lines along a length direction of the symbol.
  • a plurality of scan lines may be determined in a positioning box.
  • the positioning box may be used to determine a length direction of the PDF 417 barcode (e.g., the horizontal direction as shown in FIG. 16) .
  • Each of the plurality of scan lines may traverse pixels in a same row along the length direction of the PDF 417 barcode.
  • the plurality of scan lines may be determined as the row lines.
  • a plurality of row lines P1-P19 may be determined.
  • Each of the plurality of row lines P1-P19 may traverse pixels in a same row in the symbol image 1600.
  • each of the plurality of row lines P1-P19 may be determined by connecting pixels in a same row in the symbol image 1600 along the length direction of the PDF 417 barcode.
  • the processing device 104 may determine a start boundary and an end boundary of the symbol region.
  • the processing device 104 may determine the arrangement and/or widths of bars and spaces of the start symbol character and/or the end symbol character.
  • the processing device 104 may further determine grey values of at least a part of pixels corresponding to the bars and spaces of the start symbol character and/or the end symbol character (also referred to as predetermined grey values corresponding to the start symbol character and/or the end symbol character) .
  • the predetermined grey values corresponding to the start symbol character and/or the end symbol character may be compared with grey values of pixels on at least one of the plurality of row lines. Based on the comparison, the processing device 104 may identify at least one line segment on at least one of the plurality of row lines corresponding to the start symbol character and/or the end symbol character.
  • the processing device 104 may identify two or more line segments corresponding to the start symbol character on the plurality of row lines. Each of the two or more line segments may include a start point and an end point. As illustrated in FIG. 16, start points of the start symbol character of the PDF 417 barcode may be intersections between a line segment EF and the plurality of row lines P1-P19. End points of the start symbol character of the PDF 417 barcode may be intersections between a line segment AD and the plurality of row lines P1-P19.
  • Start points of the two or more line segments corresponding to the start symbol character may form a start edge of the start symbol character. End points of the two or more line segments corresponding to the start symbol character may form the end edge of the start symbol character.
  • a random sample consensus (RANSAC) algorithm may be used to determine the end edge of the start symbol character based on the end points of the two or more line segments corresponding to the start symbol character.
  • the end edge of the start symbol character may be determined by fitting a line according to the RANSAC algorithm based on the end points of the two or more line segments corresponding to the start symbol character.
  • the start symbol character may be out of the symbol region. The end edge of the start symbol character may coincide with the start boundary of the symbol region. In this case, the start boundary of the symbol region may be determined. Referring to FIG. 16, the start boundary of the symbol region may be A1.
  • the processing device 104 may identify two or more line segments corresponding to the end symbol character on the plurality of row lines. Each of the two or more line segments may include a start point and an end point. As illustrated in FIG. 16, start points of the end symbol character of the PDF 417 barcode may be intersections between a line segment BC and the plurality of row lines P1-P19. End points of the end symbol character of the PDF 417 barcode may be intersections between a line segment GH and the plurality of row lines P1-P19.
  • Start points of the two or more line segments corresponding to the end symbol character may form a start edge of the end symbol character.
  • End points of the two or more line segments corresponding to the end symbol character may form the end edge of the end symbol character.
  • a random sample consensus (RANSAC) algorithm may be used to determine the start edge of the end symbol character based on the start points of the two or more line segments corresponding to the end symbol character.
  • the start edge of the end symbol character may be determined by fitting a line according to the RANSAC algorithm based on the start points of the two or more line segments corresponding to the end symbol character.
  • the end symbol character may be out of the symbol region.
  • the start edge of the end symbol character may coincide with the end boundary of the symbol region. In this case, the end boundary of the symbol region may be determined. Referring to FIG. 16, the end boundary of the symbol region may be A5.
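As a minimal sketch of the random sample consensus line fitting referenced above, the per-row end points can be turned into a single straight boundary line as follows. The point format, iteration count, inlier threshold, and fixed seed are illustrative assumptions.

```python
import random

def fit_line(p, q):
    """Line through p and q as (a, b, c) with a*x + b*y + c = 0, normalized."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    norm = (a * a + b * b) ** 0.5
    return a / norm, b / norm, c / norm

def ransac_line(points, iters=200, thresh=1.5, seed=0):
    """Return the (a, b, c) line supported by the most inlier points,
    together with the inlier count."""
    rng = random.Random(seed)
    best_line, best_inliers = None, -1
    for _ in range(iters):
        p, q = rng.sample(points, 2)
        if p == q:          # degenerate sample; cannot define a line
            continue
        a, b, c = fit_line(p, q)
        inliers = sum(1 for (x, y) in points
                      if abs(a * x + b * y + c) <= thresh)
        if inliers > best_inliers:
            best_line, best_inliers = (a, b, c), inliers
    return best_line, best_inliers
```

Because outlier end points (e.g., from a damaged row) support few candidate lines, the fitted boundary follows the consensus of the remaining rows.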
  • the processing device 104 may determine, based on the plurality of row lines, a plurality of column boundaries among the plurality of symbol characters between the start boundary and the end boundary of the symbol region.
  • the processing device 104 may obtain a length of at least one line segment corresponding to the start symbol character.
  • the length of the at least one line segment corresponding to the start symbol character may be designated as the width of the start symbol character.
  • a line segment between an intersection O1 of the row line P1 and the line segment EF and an intersection O2 of the row line P1 and the line segment AD may be a line segment corresponding to the start symbol character of the PDF 417 barcode in the symbol image 1600.
  • a length of the line segment O1 O2 may be designated as the width of the start symbol character.
  • the processing device 104 may determine the plurality of column boundaries among the plurality of symbol characters based on boundary characteristics between adjacent symbol characters and the reference width range.
  • the reference width range may be a range defined by applying an increment (e.g., two pixels) and a decrement (e.g., two pixels) to the width of the reference symbol character.
  • the processing device 104 may identify a plurality of pixels or points (e.g., pixels, midpoints each of which is between two consecutive pixels, etc.) on at least two of the plurality of row lines (e.g., P1-P19) in the symbol region that satisfy the column boundary characteristics between adjacent symbol characters and the reference width range.
  • the RANSAC algorithm may be used to determine the plurality of column boundaries based on the plurality of points.
  • the plurality of column boundaries A2, A3, A4, and A5 may be determined.
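One way to picture the collection of column-boundary candidate points on a single binarized row line is sketched below: starting from the start boundary, accept the next white-to-black transition whose distance from the previous boundary lies inside the reference width range. The function name, the resynchronization step, and the default margin are assumptions for illustration.

```python
def column_boundary_points(line, start_x, char_width, margin=2):
    """Return x positions of column-boundary candidates on `line`
    (1 = black bar, 0 = white space).  A candidate is a white->black
    change roughly one character width after the previous boundary."""
    lo, hi = char_width - margin, char_width + margin
    boundaries = []
    prev = start_x
    x = start_x + lo
    while x < len(line):
        if lo <= x - prev <= hi and line[x - 1] == 0 and line[x] == 1:
            boundaries.append(x)
            prev = x
            x = prev + lo
        elif x - prev > hi:
            # no transition found inside the range; skip one width ahead
            prev += char_width
            x = prev + lo
        else:
            x += 1
    return boundaries
```

Running this on at least two row lines yields the point set from which the column boundaries can then be fitted.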
  • the processing device 104 may determine an upper boundary and a lower boundary of the symbol region.
  • the processing device 104 may determine a plurality of intersections (e.g., pixels) of the plurality of row lines and the column boundary.
  • the processing device 104 may perform an upward traverse and identify an upper intersection among the plurality of intersections.
  • a pixel above the upper intersection (also referred to as upper pixel) along the column boundary and a pixel subsequent to the upper pixel along the row direction (also referred to as subsequent pixel of the upper pixel) in a same row may be determined.
  • the upper pixel and the subsequent pixel of the upper pixel may be located at different sides of the column boundary.
  • the processing device 104 may determine whether the upper intersection and the subsequent pixel of the upper intersection, as well as the upper pixel and the subsequent pixel of the upper pixel, satisfy the column boundary characteristics (i.e., the left pixel of the pair is white and the right pixel is black, so that the colors change from white to black along the row direction). If the upper pixel and the subsequent pixel of the upper pixel do not satisfy the column boundary characteristics while the upper intersection and the pixel subsequent to the upper intersection along the row direction do (i.e., the upper pixel and the upper intersection satisfy upper boundary characteristics), it may indicate that the upper intersection and the subsequent pixel of the upper intersection are on the upper boundary of the symbol region.
  • a pixel O3, which is an intersection of a column boundary A2 and a row line P1, may be determined as an upper intersection.
  • a pixel subsequent to the upper intersection O3 along the row direction (also referred to as subsequent pixel of the upper intersection) may be O6.
  • if the upper intersection O3 and the subsequent pixel O6 of the upper intersection satisfy the column boundary characteristics, and the upper pixel O4 of the upper intersection O3 and the subsequent pixel O5 of the upper pixel do not satisfy the column boundary characteristics (i.e., the upper pixel O4 and the upper intersection O3 satisfy upper boundary characteristics), it may indicate that the upper intersection O3 and the subsequent pixel O6 of the upper intersection may be on the upper boundary of the symbol region.
  • pixels on the upper boundary of the symbol region may be determined according to the column boundaries A3, A4, and A5.
  • the RANSAC algorithm may be used to determine the upper boundary of the symbol region based on the upper intersection and the subsequent pixel of the upper intersection corresponding to each of the plurality of column boundaries.
  • the upper boundary may be P1 as illustrated in FIG. 16.
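The upward traverse just described can be sketched for a binarized image (1 = black, 0 = white) as follows; the function name and the boundary test placement (pixel pair straddling the column boundary at x and x+1) are illustrative assumptions.

```python
def find_upper_boundary_row(img, x, y0):
    """Traverse upward from row-line intersection (y0, x) along a column
    boundary; return the topmost row where the white(0)->black(1) change
    across the boundary still holds, or None if it never holds."""
    def is_boundary(y):
        return img[y][x] == 0 and img[y][x + 1] == 1
    y = y0
    while y > 0 and is_boundary(y - 1):
        y -= 1
    return y if is_boundary(y) else None
```

Repeating this for every column boundary gives one candidate pixel per boundary, from which the upper boundary line can be fitted; the downward traverse for the lower boundary is symmetric.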
  • the processing device 104 may perform a downward traverse and identify a lower intersection among the plurality of intersections.
  • a pixel beneath the lower intersection (also referred to as lower pixel) along the column boundary and a pixel subsequent to the lower pixel along the row direction (also referred to as subsequent pixel of the lower pixel) in a same row may be determined.
  • the lower pixel and the subsequent pixel of the lower pixel may be located at different sides of the column boundary.
  • the processing device 104 may determine whether the lower intersection and the subsequent pixel of the lower intersection, and the lower pixel and the subsequent pixel of the lower pixel satisfy the column boundary characteristics. If the lower pixel and the subsequent pixel of the lower pixel do not satisfy the column boundary characteristics and the lower intersection and the pixel subsequent to the lower intersection along the row direction satisfy the column boundary characteristics (i.e., the lower pixel and the lower intersection satisfy lower boundary characteristics) , it may indicate that the lower intersection and the subsequent pixel of the lower intersection may be on the lower boundary of the symbol region.
  • a pixel O7, which is an intersection of a column boundary A2 and a row line P19, may be determined as a lower intersection.
  • a pixel subsequent to the lower intersection O7 along the row direction (also referred to as subsequent pixel of the lower intersection) may be O8.
  • if the lower intersection O7 and the subsequent pixel O8 of the lower intersection satisfy the column boundary characteristics, and the lower pixel O9 of the lower intersection O7 and the subsequent pixel O10 of the lower pixel do not satisfy the column boundary characteristics (i.e., the lower pixel O9 and the lower intersection O7 satisfy lower boundary characteristics), it may indicate that the lower intersection O7 and the subsequent pixel O8 of the lower intersection may be on the lower boundary of the symbol region.
  • pixels on the lower boundary of the symbol region may be determined according to the column boundaries A3, A4, and A5.
  • the RANSAC algorithm may be used to determine the lower boundary of the symbol region based on the lower intersection and the subsequent pixel of the lower intersection corresponding to each of the plurality of column boundaries.
  • the lower boundary may be P19 as illustrated in FIG. 16.
  • the processing device 104 may determine a plurality of row boundaries among the plurality of symbol characters between the upper boundary and the lower boundary of the symbol region.
  • the plurality of row boundaries may be determined by identifying the plurality of row boundaries from the plurality of row lines based on boundary characteristics between adjacent symbol characters.
  • widths of bars and spaces between two adjacent symbol characters in the column direction may be different.
  • a row boundary between the two adjacent symbol characters in the column direction may have characteristics (also referred to as row boundary characteristics) that the colors of all the pixels on the row boundary may be the same as the colors of corresponding pixels on a first adjacent row line of the row boundary, and the color of at least one pixel on the row boundary may be different from the color of the corresponding pixel on a second adjacent row line of the row boundary.
  • the first adjacent row line may be above the row boundary, and the second adjacent row line may be below the row boundary.
  • the first adjacent row line may be below the row boundary, and the second adjacent row line may be above the row boundary.
  • grey values of all the pixels on the row boundary may be the same as or close to grey values of corresponding pixels on the first adjacent row line of the row boundary, and the grey value of at least one pixel on the row boundary may be different from the grey value of the corresponding pixel on the second adjacent row line of the row boundary.
  • row lines P4, P7, P10, P13, and P16 may satisfy the row boundary characteristics.
  • the row lines P4, P7, P10, P13, and P16 may be designated as the row boundaries among the plurality of symbol characters.
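The row-boundary test above can be expressed, under the same binarized-image assumption used earlier, as a comparison of a candidate row against its two neighbouring rows; the function name is an illustrative assumption.

```python
def is_row_boundary(img, y):
    """True if row y of a binarized image matches one adjacent row in
    every pixel while differing from the other adjacent row in at least
    one pixel (the patterns change across a character-row boundary)."""
    if y <= 0 or y >= len(img) - 1:
        return False
    same_above = img[y] == img[y - 1]
    same_below = img[y] == img[y + 1]
    return (same_above and not same_below) or (same_below and not same_above)
```

In a grey-level image the exact equality would be replaced by a closeness test on grey values, matching the relaxed characteristics described above.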
  • the processing device 104 may determine a character region corresponding to the symbol character for each of the plurality of symbol characters based on the plurality of column boundaries and the plurality of row boundaries.
  • a plurality of character regions corresponding to the symbol characters may be determined based on the row boundaries P4, P7, P10, P13, and P16 and the column boundaries A2, A3, and A4.
  • the processing device 104 may decode each symbol character based on grey values associated with the character region corresponding to the symbol character.
  • the operation for decoding each symbol character may be similar to or the same as the operations 1410 through 1440 of the process 1400 as illustrated in FIG. 14, and will not be repeated here.
  • aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented as entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of software and hardware, all of which may generally be referred to herein as a “module,” “unit,” “component,” “device,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, or Python, conventional procedural programming languages such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN) , or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS) .

Abstract

A system and method for decoding symbol characters. The method may include: obtaining a symbol image of a symbol including a plurality of symbol characters in a symbol region; determining a plurality of row lines along a length direction of the symbol; determining, based on the plurality of row lines, a plurality of column boundaries among the plurality of symbol characters and a plurality of row boundaries among the plurality of symbol characters; for each of the plurality of symbol characters, determining a character region corresponding to the symbol character based on the plurality of column boundaries and the plurality of row boundaries; and decoding the symbol character based on grey values associated with the character region corresponding to the symbol character.

Description

SYSTEMS AND METHODS FOR BARCODE DECODING
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority of Chinese Patent Application No. 202010378011.9, filed on May 7, 2020, the contents of which are hereby incorporated by reference.
TECHNICAL FIELD
This disclosure generally relates to image processing, and more particularly, relates to systems and methods for decoding a barcode in an image.
BACKGROUND
Barcode symbols such as portable data file (PDF) 417 barcodes are widely used in daily life, for example in social activities, certificates, management, transportation, and payment. A barcode symbol has a series of bars and spaces encoding high-density data. The barcode symbol can be read from an image using a scanner, such as a camera. After the barcode symbol is read from the image, the data encoded in the barcode symbol can be decoded using a processing device. However, during the decoding process, errors usually occur due to factors such as image distortion, uneven light, etc. Therefore, it is desirable to provide systems and methods for decoding the barcode symbol accurately and efficiently.
SUMMARY
In an aspect of the present disclosure, a system is provided. The system may comprise at least one storage device storing a set of instructions; and at least one processor configured to communicate with the at least one storage device. When executing the set of instructions, the at least one processor is directed to perform operations including obtaining a symbol image of a symbol including a plurality of symbol characters in a symbol region; determining a plurality of row lines along a length direction of the symbol; determining, based on the plurality of row lines, a plurality of column boundaries among the plurality of symbol characters,  each of the plurality of column boundaries corresponding to two consecutive columns of two symbol characters of the plurality of symbol characters; determining, based on the plurality of row lines, a plurality of row boundaries among the plurality of symbol characters, each of the plurality of row boundaries corresponding to two adjacent rows of the plurality of symbol characters; and for each of the plurality of symbol characters, determining a character region corresponding to the symbol character based on the plurality of column boundaries and the plurality of row boundaries; and decoding the symbol character based on grey values associated with the character region corresponding to the symbol character.
In another aspect of the present disclosure, a method is provided. The method may be implemented on a computing device having a processor and a computer-readable storage device. The method may comprise obtaining a symbol image of a symbol including a plurality of symbol characters in a symbol region; determining a plurality of row lines along a length direction of the symbol; determining, based on the plurality of row lines, a plurality of column boundaries among the plurality of symbol characters, each of the plurality of column boundaries corresponding to two consecutive columns of two symbol characters of the plurality of symbol characters; determining, based on the plurality of row lines, a plurality of row boundaries among the plurality of symbol characters, each of the plurality of row boundaries corresponding to two adjacent rows of the plurality of symbol characters; and for each of the plurality of symbol characters, determining a character region corresponding to the symbol character based on the plurality of column boundaries and the plurality of row boundaries; and decoding the symbol character based on grey values associated with the character region corresponding to the symbol character.
In a further aspect of the present disclosure, a non-transitory readable medium is provided. The non-transitory readable medium comprises at least one set of instructions, wherein when executed by at least one processor of a computing device, the at least one set of instructions directs the at least one processor to perform a method. The method may comprise obtaining a symbol image of a  symbol including a plurality of symbol characters in a symbol region; determining a plurality of row lines along a length direction of the symbol; determining, based on the plurality of row lines, a plurality of column boundaries among the plurality of symbol characters, each of the plurality of column boundaries corresponding to two consecutive columns of two symbol characters of the plurality of symbol characters; determining, based on the plurality of row lines, a plurality of row boundaries among the plurality of symbol characters, each of the plurality of row boundaries corresponding to two adjacent rows of the plurality of symbol characters; and for each of the plurality of symbol characters, determining a character region corresponding to the symbol character based on the plurality of column boundaries and the plurality of row boundaries; and decoding the symbol character based on grey values associated with the character region corresponding to the symbol character.
In some embodiments, determining, based on the plurality of row lines, the plurality of column boundaries among the plurality of symbol characters includes determining a width of a reference symbol character associated with the plurality of symbol characters; determining a reference width range based on the width of the reference symbol character; and determining the plurality of column boundaries among the plurality of symbol characters based on boundary characteristics between adjacent symbol characters and the reference width range.
In some embodiments, the reference symbol character includes a start symbol character or an end symbol character.
In some embodiments, determining the width of the reference symbol character includes obtaining a preset codeword string associated with the reference symbol character; for at least one of the plurality of row lines, identifying at least one reference line segment based on grey values of pixels on the at least one of the plurality of row lines and predetermined grey values associated with the preset codeword string; and designating a length of the at least one reference line segment as the width of the reference symbol character.
In some embodiments, determining, based on the plurality of row lines, the  plurality of row boundaries among the plurality of symbol characters includes identifying the plurality of row boundaries among the plurality of symbol characters from the plurality of row lines based on boundary characteristics between adjacent symbol characters.
In some embodiments, for each of the plurality of symbol characters, decoding the symbol character based on grey values associated with the character region corresponding to the symbol character includes dividing, along a row direction, the character region corresponding to the symbol character into a plurality of blocks; determining a global gray value of each of the plurality of blocks; determining a contrast value of the symbol character based on the global grey values of the plurality of blocks; and determining a codeword corresponding to the symbol character based on the contrast value.
In some embodiments, determining the contrast value of the symbol character based on the global grey values of the plurality of blocks includes determining a first ratio of grey values of blocks of a first type in the character region to a count of the blocks of the first type; determining a second ratio of grey values of blocks of a second type in the character region to a count of the blocks of the second type; and determining the contrast value of the symbol character based on a difference value between the first ratio and the second ratio.
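The block-and-contrast decoding described in the embodiments above can be sketched as follows for a single row of grey values across one character region. The default of 17 blocks reflects the 17-module width of a PDF 417 symbol character; the midpoint threshold used to split blocks into the two types (bars vs. spaces) is an assumption for illustration.

```python
def decode_character_blocks(region_row, n_blocks=17):
    """region_row: 1-D list of grey values across one symbol character.
    Returns (bits, contrast): per-block bar(1)/space(0) labels and the
    contrast value between the two block types."""
    step = len(region_row) / n_blocks
    means = []
    for i in range(n_blocks):
        seg = region_row[int(i * step):int((i + 1) * step)]
        means.append(sum(seg) / len(seg))        # global grey value of block
    mid = (max(means) + min(means)) / 2          # assumed split threshold
    dark = [m for m in means if m < mid] or [0]
    bright = [m for m in means if m >= mid] or [0]
    # contrast: difference between the mean grey values of the two types
    contrast = sum(bright) / len(bright) - sum(dark) / len(dark)
    bits = [1 if m < mid else 0 for m in means]  # 1 = bar (dark module)
    return bits, contrast
```

The resulting bit pattern can then be matched against the codeword table, while a low contrast value flags a character region that was segmented or imaged poorly.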
In some embodiments, the operations further include determining a start boundary, an end boundary, an upper boundary, and a lower boundary of the symbol region.
In some embodiments, determining the start boundary of the symbol region includes for at least one of the plurality of row lines, identifying at least one end point of a start symbol character based on grey values of pixels on the at least one of the plurality of row lines and predetermined grey values associated with a start codeword string; and determining the start boundary of the symbol region based on the at least one end point of the start symbol character.
In some embodiments, determining the end boundary of the symbol region includes for at least one of the plurality of row lines, identifying at least one start point  of an end symbol character based on grey values of pixels on the at least one of the plurality of row lines and predetermined grey values associated with an end codeword string; and determining the end boundary of the symbol region based on the at least one start point of the end symbol character.
In some embodiments, determining the upper boundary of the symbol region includes for each of the plurality of column boundaries, determining a plurality of intersections of the plurality of row lines and the column boundary; performing an upward traverse until an upper pixel of a first intersection is identified, wherein the upper pixel and the first intersection satisfy upper boundary characteristics; and determining the upper boundary of the symbol region based on the first intersection of each of the plurality of column boundaries.
In some embodiments, determining the lower boundary of the symbol region includes: for each of the plurality of column boundaries, determining a plurality of intersections of the plurality of row lines and the column boundary; performing a downward traverse until a lower pixel of a second intersection is identified, wherein the lower pixel and the second intersection satisfy lower boundary characteristics; and determining the lower boundary of the symbol region based on the second intersection of each of the plurality of column boundaries.
Additional features will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The features of the present disclosure may be realized and attained by practice or use of various aspects of the methodologies, instrumentalities and combinations set forth in the detailed examples discussed below.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure is further described in terms of exemplary embodiments. These exemplary embodiments are described in detail with reference to the drawings. The drawings are not to scale. These embodiments  are non-limiting exemplary embodiments, in which like reference numerals represent similar structures throughout the several views of the drawings, and wherein:
FIG. 1 is a schematic diagram illustrating an exemplary image processing system according to some embodiments of the present disclosure;
FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure;
FIG. 3 is a schematic diagram illustrating exemplary components of an exemplary terminal according to some embodiments of the present disclosure;
FIG. 4 is a block diagram illustrating an exemplary processing device according to some embodiments of the present disclosure;
FIG. 5 is a flow chart illustrating an exemplary process for decoding symbol characters of a symbol in a symbol image according to some embodiments of the present disclosure;
FIGs. 6 and 7 illustrate an exemplary PDF 417 barcode according to some embodiments of the present disclosure;
FIG. 8 is a schematic diagram illustrating exemplary row lines in a symbol image according to some embodiments of the present disclosure;
FIG. 9 is a partial enlarged view of a PDF 417 barcode according to some embodiments of the present disclosure;
FIG. 10 is a schematic diagram illustrating an exemplary row boundary between two adjacent symbol characters according to some embodiments of the present disclosure;
FIG. 11 illustrates an exemplary symbol image according to some embodiments of the present disclosure;
FIG. 12 is a flow chart illustrating an exemplary process for determining a plurality of column boundaries among a plurality of symbol characters in a symbol region of a symbol according to some embodiments of the present disclosure;
FIG. 13 illustrates an exemplary symbol image according to some embodiments of the present disclosure;
FIG. 14 is a flow chart illustrating an exemplary process for decoding a symbol character based on grey values associated with a character region corresponding to the symbol character according to some embodiments of the present disclosure;
FIG. 15 is a flow chart illustrating an exemplary process for decoding symbol characters of a symbol in a symbol image according to some embodiments of the present disclosure; and
FIG. 16 is a schematic diagram of an exemplary PDF 417 barcode according to some embodiments of the present disclosure.
DETAILED DESCRIPTION
In order to illustrate the technical solutions related to the embodiments of the present disclosure, brief introduction of the drawings referred to in the description of the embodiments is provided below. Obviously, drawings described below are only some examples or embodiments of the present disclosure. Those having ordinary skills in the art, without further creative efforts, may apply the present disclosure to other similar scenarios according to these drawings. Unless stated otherwise or obvious from the context, the same reference numeral in the drawings refers to the same structure and operation.
As used in the disclosure and the appended claims, the singular forms “a, ” “an, ” and “the” include plural referents unless the content clearly dictates otherwise. It will be further understood that the terms “comprises, ” “comprising, ” “includes, ” and/or “including” when used in the disclosure, specify the presence of stated steps and elements, but do not preclude the presence or addition of one or more other steps and elements.
Some modules of the system may be referred to in various ways according to some embodiments of the present disclosure, however, any number of different modules may be used and operated in a client terminal and/or a server. These modules are intended to be illustrative, not intended to limit the scope of the present disclosure. Different modules may be used in different aspects of the system and  method.
According to some embodiments of the present disclosure, flow charts are used to illustrate the operations performed by the system. It is to be expressly understood that the operations above or below may or may not be implemented in order. Conversely, the operations may be performed in inverted order, or simultaneously. In addition, one or more other operations may be added to the flow charts, or one or more operations may be omitted from the flow charts.
Technical solutions of the embodiments of the present disclosure are described with reference to the drawings below. It is obvious that the described embodiments are not exhaustive and are not limiting. Other embodiments obtained, based on the embodiments set forth in the present disclosure, by those with ordinary skill in the art without any creative effort are within the scope of the present disclosure.
An aspect of the present disclosure relates to systems and methods for image decoding. The system may obtain a symbol image including a plurality of symbol characters in a symbol region. The symbol image may be an image of a symbol (e.g., a portable data file (PDF) 417 barcode) . A plurality of row lines along a length direction of the symbol may be determined. Each row line may traverse pixels in a same row in the symbol region. The system may also determine, based on the plurality of row lines, a plurality of column boundaries among the plurality of symbol characters and a plurality of row boundaries among the plurality of symbol characters. For each of the plurality of symbol characters, the system may determine a character region corresponding to the symbol character based on the plurality of column boundaries and the plurality of row boundaries. The system may further decode the symbol character based on grey values associated with the character region corresponding to the symbol character. In this case, the position of each of the plurality of symbol characters may be determined more accurately, a codeword corresponding to the symbol character may be determined more efficiently and accurately, and errors in the decoding process caused by factors such as image  distortion, uneven light, etc., may be reduced or eliminated, thus improving the effectiveness and accuracy of the decoding process.
FIG. 1 is a schematic diagram illustrating an exemplary image processing system according to some embodiments of the present disclosure. The image processing system 100 may process an image or a video composed of a plurality of images, and extract data from the image or the video. As shown, the image processing system 100 may include an image source 101, a processing device 104, a buffer manager 105, a buffer 106, a transmitter 107, a terminal 108 (or a plurality of terminals 108) , a network 112, and a network storage device 113 (or a plurality of network storages 113) .
The image source 101 may provide an image or a video including at least one image (also referred to as a video frame) to a user of the terminal 108 through the network 112. In some embodiments, the image source 101 may include a scanner 102 and/or a media server 103.
The scanner 102 may be able to capture an image or a video including at least one image. In some embodiments, the image may be a symbol image. As used herein, the symbol image may be an image of a symbol. In some embodiments, the symbol image may be a still image or a video frame obtained from a video. The symbol image may be a two-dimensional (2D) image or a three-dimensional (3D) image. The scanner 102 may be a laser scanner, an optical scanner, etc.
In some embodiments, the optical scanner may be a camera. The camera may be, for example, a digital camera, a video camera, a security camera, a web camera, a smartphone, a tablet, a laptop, a video gaming console equipped with a web camera, a camera with multiple lenses, etc.
Merely for illustration, the camera may include a lens, a shutter, a sensor, a processing element, and a storage element. The lens may be an optical element that focuses a light beam by means of refraction to form an image. The lens may be configured to capture light from a target object (e.g., a barcode on a card, a paper, a bag, a package, etc.). An aperture of the lens may define a size of a hole through which light passes to reach the sensor. The aperture may be adjustable to adjust the amount of light that passes through the lens. The focal length of the lens may be adjustable to adjust the coverage of the camera.
The shutter may be opened to allow light through the lens when an image is captured. The shutter may be controlled manually or automatically by the processing element.
The sensor may be configured to receive light passing through the lens and transform light signals of the received light into electrical signals. The sensor may include a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) sensor. The sensor may be in communication with the logic circuits, and may be configured to detect the target object using the lens and transform the received light from the target object into electronic signals.
The processing element may be configured to process data and/or information relating to the camera and/or control one or more components (e.g., the lens, the shutter) in the camera. For example, the processing element may automatically determine values of exposure parameters of the camera such as an exposure time, an exposure gain, and an aperture. The processing element may also adjust the quality (e.g., sharpness, contrast, noise level, etc.) of images taken by the camera.
In some embodiments, the processing element may be local or remote. For example, the processing element may communicate with the camera via a network. As another example, the processing element may be integrated into the camera.
The storage element may store data, instructions, and/or any other information. In some embodiments, the storage element may store data obtained from the processing element. For example, the storage element may store captured images. In some embodiments, the storage element may store data and/or instructions that a processing device may execute or use to perform exemplary methods described in the present disclosure. In some embodiments, the storage element may include a mass storage, a removable storage, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. Exemplary mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc. Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. Exemplary volatile read-and-write memory may include a random-access memory (RAM). Exemplary RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), a zero-capacitor RAM (Z-RAM), etc. Exemplary ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), a digital versatile disk ROM, etc.
The media server 103 may be a server (e.g., a computer or a group of computers) for storing or providing images or videos including a plurality of images. The media server 103 may also include an image processing element (not shown) configured to process the images using exemplary methods introduced in the present disclosure.
The image source 101 may send the images or videos to the processing device 104. The processing device 104 may process the images or videos. For example, the images or videos may include a symbol image. The symbol image may be an image of a symbol. For example, the symbol may be a barcode (e.g., a portable data file (PDF) 417, a code 16K, a code 49, etc. ) . The symbol may include a plurality of symbol characters in a symbol region. As used herein, the symbol character may refer to a minimum unit for encoding data in the symbol. The symbol region may refer to a region corresponding to at least a portion of the symbol where the plurality of symbol characters are located. The processing device 104 may decode the symbol and generate decoded data corresponding to the symbol.
In some embodiments, the processing device 104 may be a single server or a server group. The server group may be centralized or distributed. In some embodiments, the processing device 104 may be local or remote. In some embodiments, the processing device 104 may be implemented on a cloud platform. Merely by way of example, the cloud platform may include a private cloud, a public  cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof. In some embodiments, the processing device 104 may be implemented by a computing device 200 having one or more components as illustrated in FIG. 2.
The images, videos, and/or decoded data corresponding to the images or the videos may be stored in the buffer 106. The buffer 106 may be managed by the buffer manager 105. The buffer 106 may be a storage device for buffering the images, videos, and/or decoded data corresponding to the images or the videos to be transmitted through the network 112. It may be a device remote from the image source 101 or a local device integrated in the image source 101, such as the storage medium of the camera. The buffer 106 may include a mass storage device, a removable storage device, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. Exemplary mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc. Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. Exemplary volatile read-and-write memory may include a random-access memory (RAM), such as a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), and a zero-capacitor RAM (Z-RAM). Exemplary ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), a digital versatile disk ROM, etc.
The transmitter 107 may transmit the images, videos, and/or decoded data corresponding to the images or the videos buffered in the buffer 106 to the network 112. The transmitter 107 may transmit the images, videos, and/or decoded data corresponding to the images or the videos in response to instructions sent from the image source 101, the buffer manager 105, the terminal 108, or the like, or a combination thereof. Alternatively or additionally, the transmitter 107 may spontaneously transmit the images, videos, and/or decoded data corresponding to the images or the videos stored in the buffer 106. The transmitter 107 may transmit the images, videos, and/or decoded data corresponding to the images or the videos to the terminal 108 through the network 112.
The terminal 108 may receive the transmitted images, videos, and/or decoded data corresponding to the images or the videos through the network 112. In some embodiments, the terminal 108 may display the images, videos, and/or decoded data corresponding to the images or the videos to a user or perform further operations such as payment, identity authentication, registration, etc.
The terminal 108 may take various forms. For example, the terminal 108 may include a mobile device 109, a tablet computer 110, a laptop computer 111, or the like, or any combination thereof. In some embodiments, the mobile device 109 may include a wearable device, a mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the wearable device may include a bracelet, footgear, eyeglasses, a helmet, a watch, clothing, a backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the mobile device may include a mobile phone, a personal digital assistant (PDA), a laptop, a tablet computer, a desktop, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device and/or the augmented reality device may include a Google Glass™, an Oculus Rift™, a Hololens™, a Gear VR™, etc. In some embodiments, the terminal(s) 108 may be part of a processing engine.
The network 112 may include any suitable network that can facilitate data transmission. The network 112 may be and/or include a public network (e.g., the Internet) , a private network (e.g., a local area network (LAN) , a wide area network (WAN) ) , a wired network (e.g., an Ethernet network) , a wireless network (e.g., an 802.11 network, a Wi-Fi network) , a cellular network (e.g., a Long Term Evolution (LTE) network) , a frame relay network, a virtual private network ( "VPN" ) , a satellite  network, a telephone network, routers, hubs, switches, server computers, and/or any combination thereof. Merely by way of example, the network 112 may include a cable network, a wireline network, a fiber-optic network, a telecommunications network, an intranet, a wireless local area network (WLAN) , a metropolitan area network (MAN) , a public telephone switched network (PSTN) , a Bluetooth TM network, a ZigBee TM network, a near field communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network 112 may include one or more network access points.
In some embodiments, the network 112 may include one or more network storage devices 113. The network storage device 113 may be a device for buffering or caching data transmitted in the network 112. The images, videos, and/or decoded data corresponding to the images or the videos transmitted by the transmitter 107 may be buffered or cached in one or more network storage devices 113 before being received by the terminal 108. The network storage device 113 may be a server, a hub, a gateway, or the like, or a combination thereof.
It may be noted that one or more of the processing device 104, the buffer manager 105, the buffer 106, and the transmitter 107 may be a stand-alone device, or a module integrated into the image source 101 or another stand-alone device. For example, one or more of the processing device 104, buffer manager 105, buffer 106, and transmitter 107 may be integrated into the scanner 102, the media server 103, and/or the terminal 108. As another example, the processing device 104, buffer manager 105, buffer 106, and transmitter 107 may be included in a video processing engine which may communicate with the image source 101 through a direct wired connection, the network 112, or another network. As a further example, the processing device 104 may be a stand-alone device (e.g., a computer or a server), while the buffer manager 105, buffer 106, and transmitter 107 may be included in another stand-alone device.
FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure. For example, the computing device 200  may be the media server 103, the processing element of the scanner 102, and/or an electronic device specialized in image processing. The processing device 104 and buffer manager 105 may also be implemented on the computing device 200. As illustrated in FIG. 2, the computing device 200 may include a processor 222, a storage 227, an input/output (I/O) 226, and a communication port 225.
The processor 222 may execute computer instructions (e.g., program code) and perform functions in accordance with techniques described herein. For example, the processor 222 may include interface circuits and processing circuits therein. The interface circuits may be configured to receive electronic signals from a bus (not shown in FIG. 2), wherein the electronic signals encode structured data and/or instructions for the processing circuits to process. The processing circuits may conduct logical operations and calculations, and then determine a conclusion, a result, and/or an instruction encoded as electronic signals. Then the interface circuits may send out the electronic signals from the processing circuits via the bus.
The computer instructions may include, for example, routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions described herein. In some embodiments, the processor 222 may include one or more hardware processors, such as a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field programmable gate array (FPGA), an advanced RISC machine (ARM), a programmable logic device (PLD), any circuit or processor capable of executing one or more functions, or the like, or any combination thereof.
Merely for illustration, only one processor is described in the computing device 200. However, it should be noted that the computing device 200 in the present disclosure may also include multiple processors, thus operations and/or method steps that are performed by one processor as described in the present disclosure may also be jointly or separately performed by the multiple processors.  For example, if in the present disclosure the processor of the computing device 200 executes both step A and step B, it should be understood that step A and step B may also be performed by two or more different processors jointly or separately in the computing device 200 (e.g., a first processor executes step A and a second processor executes step B, or the first and second processors jointly execute steps A and B) .
The storage 227 may store data/information obtained from the image source 101, the processing device 104, the buffer manager 105, the buffer 106, the transmitter 107, the terminal 108, the network 112, the network storage device 113, and/or any other component of the image processing system 100. In some embodiments, the storage 227 may include a mass storage, a removable storage, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. For example, the mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc. The removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. The volatile read-and-write memory may include a random-access memory (RAM), which may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), a zero-capacitor RAM (Z-RAM), etc. The ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), a digital versatile disk ROM, etc. In some embodiments, the storage 227 may store one or more programs and/or instructions to perform exemplary methods described in the present disclosure. For example, the storage 227 may store a program for the processing engine (e.g., the media server 103, the processing device 104) to decode a symbol image.
The I/O 226 may input and/or output signals, data, information, etc. In some embodiments, the I/O 226 may include an input device and an output device. Examples of the input device may include a keyboard, a mouse, a touch screen, a microphone, or the like, or a combination thereof. Examples of the output device  may include a display device, a loudspeaker, a printer, a projector, or the like, or a combination thereof. Examples of the display device may include a liquid crystal display (LCD) , a light-emitting diode (LED) -based display, a flat panel display, a curved screen, a television device, a cathode ray tube (CRT) , a touch screen, or the like, or a combination thereof.
The communication port 225 may be connected to a network (e.g., the network 112) to facilitate data communications. The communication port 225 may establish connections between the image source 101, the processing device 104, the buffer manager 105, the buffer 106, the transmitter 107, the terminal 108, the network 112, the network storage device 113, and/or any other component of the image processing system 100. The connection may be a wired connection, a wireless connection, any other communication connection that can enable data transmission and/or reception, and/or any combination of these connections. The wired connection may include, for example, an electrical cable, an optical cable, a telephone wire, or the like, or any combination thereof. The wireless connection may include, for example, a Bluetooth TM link, a Wi-Fi TM link, a WiMax TM link, a WLAN link, a ZigBee link, a mobile network link (e.g., 3G, 4G, 5G) , or the like, or a combination thereof. In some embodiments, the communication port 225 may be and/or include a standardized communication port, such as RS232, RS485, etc. In some embodiments, the communication port 225 may be a specially designed communication port.
FIG. 3 is a schematic diagram illustrating exemplary components of an exemplary terminal according to some embodiments of the present disclosure. As illustrated in FIG. 3, the terminal 300 may include a communication platform 320, a display 310, a graphic processing unit (GPU) 330, a central processing unit (CPU) 340, an I/O port 350, a memory 360, and a storage 390. In some embodiments, any other suitable component, including but not limited to a system bus or a controller (not shown), may also be included in the terminal 300. In some embodiments, a mobile operating system 370 (e.g., iOS™, Android™, Windows Phone™) and one or more applications 380 may be loaded into the memory 360 from the storage 390 in order to be executed by the CPU 340. The terminal 300 may be an embodiment of the terminal 108.
To implement various modules, units, and their functionalities described in the present disclosure, computer hardware platforms may be used as the hardware platform (s) for one or more of the elements described herein. A computer with user interface elements may be used to implement a personal computer (PC) or any other type of work station or terminal device. A computer may also act as a server if appropriately programmed.
FIG. 4 is a block diagram illustrating an exemplary processing device according to some embodiments of the present disclosure. As illustrated in FIG. 4, the processing device 104 may include an obtaining module 410, a row line determination module 420, a character region determination module 430, and a decoding module 440.
The obtaining module 410 may obtain data/information. In some embodiments, the obtaining module 410 may obtain a symbol image of a symbol including a plurality of symbol characters. In some embodiments, the symbol in the symbol image may be a barcode. Exemplary barcodes may include code 16K, code 49, PDF 417, micro PDF 417, code one, maxicode, quick response (QR) code, data matrix, HanXin code, grid matrix, etc. In some embodiments, the obtained data/information may further include processed results, user instructions, algorithms, program codes, or the like, or a combination thereof.
The row line determination module 420 may determine a plurality of row lines in the symbol image along a length direction of the symbol. In some embodiments, the processing device 104 may identify edges of the symbol. The edges of the symbol may be identified based on the pixels in the symbol image. In some embodiments, the processing device 104 may adjust the size, position, and/or orientation of a positioning box dynamically until edges of the symbol coincide with or substantially coincide with edges of the positioning box. In some embodiments, each of the plurality of row lines may be determined by connecting pixels in a same row in the positioning box.
The character region determination module 430 may determine a character region corresponding to the symbol character for each of the plurality of symbol characters. In some embodiments, the character region corresponding to the symbol character for each of the plurality of symbol characters may be determined based on the plurality of column boundaries and the plurality of row boundaries in a symbol region including the plurality of symbol characters. The character region determination module 430 may obtain the plurality of row lines from the row line determination module 420, and determine the plurality of column boundaries and the plurality of row boundaries based on the plurality of row lines. Details regarding the determination of the plurality of column boundaries and the plurality of row boundaries can be found elsewhere in the present disclosure, for example, FIGs. 5, 12, and 15, and relevant descriptions thereof.
The decoding module 440 may decode each symbol character based on grey values associated with the character region corresponding to the symbol character.
For each character region, the decoding module 440 may divide the character region into a plurality of blocks. Each of the plurality of blocks may include one or more pixels (e.g., 1×4 pixels (i.e., one pixel in a row and 4 pixels in a column)). Grey values of the one or more pixels of the block may be obtained. The global grey value of the block may be an overall representation of the grey values of the one or more pixels of the block.
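As a minimal sketch of this block division, the following Python function splits a character region into blocks of the stated size and returns each block's global grey value; using the arithmetic mean as the "overall representation" is an assumption made here for illustration, and an embodiment may use another statistic:

```python
def block_grey_values(region, block_w=1, block_h=4):
    """Split a character region (2D list of grey values, rows x columns)
    into block_w x block_h pixel blocks and return each block's global
    grey value, scanning blocks left-to-right, top-to-bottom. The mean
    is used as the 'overall representation' (an illustrative assumption)."""
    rows, cols = len(region), len(region[0])
    assert rows % block_h == 0 and cols % block_w == 0, "region must tile evenly"
    values = []
    for r in range(0, rows, block_h):        # step down the column direction
        for c in range(0, cols, block_w):    # step along the row direction
            pixels = [region[r + dr][c + dc]
                      for dr in range(block_h) for dc in range(block_w)]
            values.append(sum(pixels) / len(pixels))
    return values
```

For example, a 4-row, 2-column region tiles into two 1×4 blocks (one per column), each yielding the mean grey value of its four pixels.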
In some embodiments, the decoding module 440 may determine a contrast value of the symbol character corresponding to the character region based on the global grey values of the plurality of blocks. The decoding module 440 may determine a codeword corresponding to the symbol character based on the contrast value. In some embodiments, the decoding module 440 may obtain a plurality of preset codewords (e.g., 2787 codewords). Each of the plurality of preset codewords may correspond to a predetermined codeword string. A reference contrast value of each of the plurality of preset codewords may be determined based on the corresponding predetermined codeword string. Then the processing device 104 may determine a similarity value between each of the plurality of preset codewords and the symbol character based on the reference contrast value of each preset codeword and the contrast value of the symbol character. The decoding module 440 may determine the codeword corresponding to the symbol character based on the similarity values. The codeword corresponding to the symbol character may be or include decoded data (e.g., numbers, text, vectors, etc.) corresponding to the symbol in the symbol image.
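A hedged sketch of the similarity-based matching may look as follows; the contrast vectors, the miniature two-entry codeword table, and the sum-of-absolute-differences similarity metric are all illustrative assumptions, whereas a real PDF 417 decoder would compare against all 2787 preset codewords using the metric chosen by the embodiment:

```python
def match_codeword(contrast, preset):
    """Return the preset codeword whose reference contrast vector is most
    similar to the observed contrast vector of a symbol character.
    Similarity here is the negative sum of absolute differences -- an
    illustrative choice, not necessarily the metric of any embodiment."""
    def similarity(ref):
        return -sum(abs(a - b) for a, b in zip(contrast, ref))
    return max(preset, key=lambda codeword: similarity(preset[codeword]))

# Hypothetical miniature table; reference contrast values are invented
# here purely to demonstrate the selection step.
table = {
    "51111125": [1, 1, 1, 1, 1, 0, 1, 1],
    "11111155": [1, 0, 0, 0, 0, 1, 1, 1],
}
```

With an observed contrast vector close to one reference vector, `match_codeword` selects the corresponding codeword string.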
The modules in the processing device 104 may be connected to or communicate with each other via a wired connection or a wireless connection. The wired connection may include a metal cable, an optical cable, a hybrid cable, or the like, or any combination thereof. The wireless connection may include a Local Area Network (LAN), a Wide Area Network (WAN), Bluetooth, ZigBee, Near Field Communication (NFC), or the like, or any combination thereof. Two or more of the modules may be combined into a single module, and any one of the modules may be divided into two or more units. For example, the processing device 104 may include a storage module (not shown) configured to store information and/or data (e.g., scanning data, images) associated with the above-mentioned modules.
FIG. 5 is a flow chart illustrating an exemplary process for decoding symbol characters of a symbol in a symbol image according to some embodiments of the present disclosure. In some embodiments, the process 500 may be implemented on the image processing system 100 as illustrated in FIG. 1. For example, the process 500 may be stored in a storage medium (e.g., the network storage device 113, or the storage 227 of the computing device 200) as a form of instructions, and invoked and/or executed by the processing device 104. The operations in the process 500 presented below are intended to be illustrative. In some embodiments, the process 500 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 500 as illustrated in FIG. 5 and described below is not intended to be limiting.
In 510, the processing device 104 (e.g., the processor 222, the obtaining module 410) may obtain a symbol image including a plurality of symbol characters in a symbol region.
The symbol image may be an image of a symbol. In some embodiments, the symbol image may be obtained from the image source 101 (e.g., the scanner 102 or the media server 103) of the image processing system 100. In some embodiments, the symbol in the symbol image may be a barcode. Exemplary barcodes may include code 16K, code 49, PDF 417, micro PDF 417, code one, maxicode, quick response (QR) code, data matrix, HanXin code, grid matrix, etc. The following descriptions are provided, unless otherwise stated expressly, with reference to a PDF 417 barcode for illustration and not intended to be limiting.
The PDF 417 barcode may be a symbol with a stacked linear barcode format. Merely for illustration, an exemplary PDF 417 barcode is illustrated in FIGs. 6 and 7. As shown in FIG. 6, the PDF 417 barcode may include a start symbol character (also referred to as a start pattern) A, a symbol region B, and an end symbol character (also referred to as a stop pattern) C. The start symbol character A, the symbol region B, and the end symbol character C may be arranged sequentially along a length direction of the symbol (e.g., as illustrated in FIG. 6). The symbol region B may include four boundaries. The four boundaries may include a start boundary (i.e., the boundary that separates the symbol region B from the start symbol character A), an end boundary (i.e., the boundary that separates the symbol region B from the end symbol character C), an upper boundary, and a lower boundary. The PDF 417 barcode may include a plurality of symbol characters in the symbol region B. Each of the plurality of symbol characters in the symbol region may be represented by a combination of four bars and four spaces. The four bars and four spaces may be arranged alternately. In some embodiments, the four bars and the four spaces may be constituted by 17 blocks. Each of the four bars and four spaces may correspond to a specific number or count of blocks along the length direction of the symbol. The arrangement of the specific number or count of blocks corresponding to each of the bars and spaces along the length direction of the symbol may be represented as a codeword string corresponding to the symbol character. Data encoded in the symbol character may be decoded based on the codeword string.
Merely for illustration purposes, an exemplary symbol character is provided in FIG. 7. As illustrated in FIG. 7, the symbol character 700 may be in a dotted box. The symbol character 700 may include four bars and four spaces. The four bars and the four spaces may be represented by the black portions and the white portions in the dotted box, respectively.
In some embodiments, the four bars and the four spaces may be constituted by 17 blocks, which are labeled with numbers 1 through 17 above the symbol character 700 as illustrated in FIG. 7. The 17 blocks may have a same width along the length direction of the symbol (e.g., the horizontal direction in FIG. 7) . Each block may have a shape of a stripe. A length direction of each stripe may be perpendicular to the length direction of the symbol (e.g., a width direction of the symbol) .
As for the symbol character 700, the first bar corresponds to 5 blocks, the first space corresponds to 1 block, the second bar corresponds to 1 block, the second space corresponds to 1 block, the third bar corresponds to 1 block, the third space corresponds to 1 block, the fourth bar corresponds to 2 blocks, and the fourth space corresponds to 5 blocks. Illustratively, the width of each of the four bars and four spaces is labeled below the corresponding bar or space in FIG. 7. In this case, the symbol character 700 may be represented by the codeword string 51111125. Data encoded in the symbol character 700 (e.g., a number, text, etc.) may be decoded based on the codeword string 51111125.
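Assuming the widths of the four bars and four spaces have already been measured in blocks, converting them into the codeword string is a simple concatenation. The helper below is hypothetical and only checks the PDF 417 invariant stated above, namely 8 alternating runs spanning 17 blocks:

```python
def runs_to_codeword_string(run_lengths):
    """Concatenate the bar/space widths (in blocks) of one PDF 417 symbol
    character into its codeword string. A symbol character has 4 bars and
    4 spaces spanning 17 blocks; this hypothetical helper only checks
    that invariant."""
    assert len(run_lengths) == 8 and sum(run_lengths) == 17
    return "".join(str(width) for width in run_lengths)
```

For the symbol character 700 of FIG. 7, `runs_to_codeword_string([5, 1, 1, 1, 1, 1, 2, 5])` yields "51111125".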
Each block in the symbol image may correspond to one or more pixels of the symbol image. Merely for a better understanding of the present disclosure, each block may be represented by 1×4 pixels (i.e., one pixel in a row and 4 pixels in a column as illustrated in FIG. 9) in the symbol image. A direction of the row (also referred to as a row direction) may be parallel to the length direction of the symbol. A direction of the column (also referred to as a column direction) may be parallel to the width direction of the symbol. It should be noted that this is for illustration purposes, and not intended to be limiting. A block may correspond to any number or count of pixels, such as 1×3 pixels (i.e., one pixel in a row and 3 pixels in a column), 2×7 pixels (i.e., 2 pixels in a row and 7 pixels in a column), etc.
The PDF 417 barcode may further include a first quiet zone and a second quiet zone (not shown in the figures) . The first quiet zone and the second quiet zone may be located at both sides of the symbol along its length direction. The first quiet zone and/or the second quiet zone may be spaces (i.e., white portions) having preset widths.
In 520, the processing device 104 (e.g., the processor 222, the row line determination module 420) may determine a plurality of row lines along the length direction of the symbol.
In some embodiments, upon obtaining the symbol image, the processing device 104 may determine the length direction of the symbol using a positioning box. In some embodiments, the positioning box may be a rectangular box. In some embodiments, the processing device 104 may identify edges of the symbol. The edges of the symbol may be identified based on the pixels in the symbol image. For example, the processing device 104 may obtain grey values of all the pixels in the symbol image. Since colors of the bars of the start symbol character, the end symbol character, and the plurality of symbol characters in the symbol are black, grey values of pixels corresponding to the bars may be relatively small (e.g., 0). The processing device 104 may identify the edges of the symbol based on the grey values of the pixels. The processing device 104 may adjust a size, a position, and/or an orientation of the positioning box based on the identified edges of the symbol. The orientation of the positioning box may correspond to the direction of the longer side of the positioning box. In some embodiments, the processing device 104 may adjust the size, position, and/or orientation of the positioning box dynamically until the edges of the symbol coincide with or substantially coincide with the edges of the positioning box. The orientation of the positioning box may be determined as the length direction of the symbol.
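One common shortcut to the orientation that the positioning box converges to is the principal axis of the dark (bar) pixels, computed from second-order image moments. This is a sketch of an alternative technique, not the disclosure's iterative box adjustment; the threshold value and function name are assumptions:

```python
import math

def estimate_length_direction(grey, dark_threshold=128):
    """Estimate the symbol's length direction as the axis of greatest spread
    of its dark (bar) pixels: angle = 0.5 * atan2(2*Sxy, Sxx - Syy), where
    Sxx, Syy, Sxy are central second moments of the dark-pixel cloud.
    `grey` is a list of rows of grey values; returns an angle in radians."""
    pts = [(x, y) for y, row in enumerate(grey)
           for x, v in enumerate(row) if v < dark_threshold]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts) / n
    syy = sum((y - my) ** 2 for _, y in pts) / n
    sxy = sum((x - mx) * (y - my) for x, y in pts) / n
    return 0.5 * math.atan2(2 * sxy, sxx - syy)

# A wide horizontal dark band in a white image: the estimated axis is
# (near) horizontal, i.e. the angle is close to zero.
demo = [[255] * 60 for _ in range(20)]
for y in range(8, 12):
    for x in range(5, 55):
        demo[y][x] = 0
```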
After the length direction of the symbol is determined, the processing device 104 may determine a plurality of row lines. In some embodiments, pixels in the positioning box may be in a first plurality of rows and a second plurality of columns. A row direction of each of the first plurality of rows may be parallel to the length direction of the symbol. A column direction of each of the second plurality of columns may be perpendicular to the length direction of the symbol. In some embodiments, each of the plurality of row lines may be determined by connecting pixels in a same row in the positioning box. Alternatively, multiple scan lines may be determined. In some embodiments, the multiple scan lines may be parallel to the length direction of the symbol. Each scan line may traverse pixels in a same row in the positioning box. The multiple scan lines may be determined as the row lines. In some embodiments, at least one of the plurality of row lines may be a straight line. In some embodiments, at least one of the plurality of row lines may be a curved line. Each two neighboring row lines of the plurality of row lines may have a same distance or different distances.
Merely for illustration purposes, exemplary row lines in a symbol image may be illustrated in FIG. 8. The symbol image 800 may include a symbol 810. Pixels in the symbol image 800 may be in eight rows. Accordingly, eight row lines may be determined by connecting pixels in a same row. As illustrated in FIG. 8, eight row lines L1-L8 may be determined. L1 through L8 may traverse pixels in different rows of the symbol 810.
In 530, the processing device 104 (e.g., the processor 222, the character region determination module 430) may determine, based on the plurality of row lines, a plurality of column boundaries among the plurality of symbol characters.
As for the PDF 417 barcode, the plurality of symbol characters in the symbol region may be in a stacked arrangement. For example, the plurality of symbol characters may be arranged in multiple rows and columns in the symbol region. A column boundary may refer to a boundary between two adjacent columns of symbol characters.
The processing device 104 may determine a plurality of column boundaries among the plurality of symbol characters. Each of the plurality of column boundaries may correspond to two adjacent columns of symbol characters of the plurality of symbol characters. In some embodiments, the plurality of column boundaries may be determined based on the plurality of row lines. In some embodiments, the plurality of column boundaries may be determined by identifying pixels on column boundaries corresponding to two adjacent columns of the plurality of symbol characters from the plurality of row lines.
In some embodiments, the processing device 104 may determine a width of a reference symbol character associated with the plurality of symbol characters. The reference symbol character may refer to a specific symbol character in the symbol. The reference symbol character may be used to determine a width of a symbol character in the symbol region. In some embodiments, the reference symbol character may be a start symbol character (e.g., the start symbol character A as illustrated in FIG. 6) or an end symbol character (e.g., the end symbol character C as illustrated in FIG. 6) . As for the PDF 417 barcode, the plurality of symbol characters in the symbol region may have a same width. A width of the start symbol character may be equal to a width of a symbol character in the symbol region. A width of the end symbol character may be proportional to a width of a symbol character in the symbol region.
In some embodiments, the processing device 104 may identify the start symbol character and the end symbol character from the symbol image. The start symbol character and the end symbol character may be out of the symbol region. The start symbol character may include a start edge and an end edge along the row direction of the symbol. As for the PDF 417 barcode, the end edge of the start symbol character may coincide with the start boundary of the symbol region. The processing device 104 may determine the start boundary of the symbol region by determining the end edge of the start symbol character. The end symbol character may also include a start edge and an end edge along the row direction of the symbol. The start edge of the end symbol character may coincide with the end  boundary of the symbol region. The processing device 104 may determine the end boundary of the symbol region by determining the start edge of the end symbol character. The plurality of column boundaries may be determined between the start boundary and the end boundary of the symbol region.
In some embodiments, the processing device 104 may determine a reference width range based on the width of the reference symbol character. Theoretically, as for the PDF 417 barcode, the width of the start symbol character may be equal to the width of each symbol character in the symbol region. The width of the end symbol character may be proportional to the width of each symbol character in the symbol region. However, a width of a symbol character may vary in actual situations due to various factors, for example, distortion of the symbol image. Accordingly, the processing device 104 may determine the reference width range with the various factors taken into consideration. The reference width range may be a range in which a width of a symbol character of the plurality of characters may vary. In some embodiments, the reference width range may be determined by increasing the width of the reference symbol character by an increment (e.g., 0.2 millimeters, 0.5 millimeters, 1 millimeter, 1 pixel, 2 pixels) and/or decreasing the width of the reference symbol character by a decrement (e.g., 0.2 millimeters, 0.5 millimeters, 1 millimeter, 1 pixel, 2 pixels) . The increment and/or the decrement may be defined as an error corresponding to the symbol character. In some embodiments, the reference width range may be adjustable under different situations, such as different brightness conditions, different image resolutions, different image qualities (e.g., noise levels) , etc.
The processing device 104 may determine the plurality of column boundaries among the plurality of symbol characters based on boundary characteristics between adjacent symbol characters and the reference width range. As for the PDF 417 barcode, a fourth space of a current symbol character may be consecutive to a first bar of an adjacent symbol character along the row direction of the symbol. A column boundary between the two adjacent symbol characters in the row direction may have characteristics (also referred to as column boundary characteristics) that  the color of a last block in the fourth space of the current symbol character and the color of a first block in the first bar of the adjacent symbol character may change from white to black along the row direction. Accordingly, grey values of pixels of the blocks may decrease, e.g., from 255 to 0, along the row direction. More descriptions regarding the determination of the plurality of column boundaries can be found elsewhere in the present disclosure, for example, FIG. 12 and the descriptions thereof.
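The column boundary characteristic above (a white-to-black transition spaced one reference width apart) can be sketched as follows. The helper is a simplified illustration under assumptions (exact grey-value threshold, one pixel per module, known start boundary), not the disclosed implementation:

```python
def find_column_boundaries(row_line, start, ref_width, tolerance=1, threshold=128):
    """On one row line, find white-to-black transitions (the column boundary
    characteristic) and keep those lying an integer number of reference
    symbol-character widths after the start boundary, within a tolerance."""
    transitions = [i + 1 for i in range(len(row_line) - 1)
                   if row_line[i] >= threshold and row_line[i + 1] < threshold]
    boundaries = []
    for t in transitions:
        offset = (t - start) % ref_width
        if t > start and min(offset, ref_width - offset) <= tolerance:
            boundaries.append(t)
    return boundaries

# Two adjacent 17-module characters (codeword string 51111125), one pixel
# per module: bars are grey value 0, spaces are 255.
char = [0]*5 + [255, 0, 255, 0, 255] + [0, 0] + [255]*5
row = char + char
```

With `start=0` and `ref_width=17`, the only accepted transition is the one between the two characters, at pixel index 17.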
In 540, the processing device 104 (e.g., the processor 222, the character region determination module 430) may determine, based on the plurality of row lines, a plurality of row boundaries among the plurality of symbol characters.
For each of the plurality of column boundaries, the processing device 104 may determine a plurality of intersections (e.g., pixels) of the plurality of row lines and the column boundary. The processing device 104 may perform an upward traverse and identify an upper intersection among the plurality of intersections. The upward traverse may refer to an operation for traversing intersections one by one upwards along a column boundary. A pixel above the upper intersection (also referred to as upper pixel) along the column boundary and a pixel subsequent to the upper pixel along the row direction (also referred to as subsequent pixel of the upper pixel) in a same row may be determined. The upper pixel and the subsequent pixel of the upper pixel may be located at different sides of the column boundary.
As for the PDF 417 barcode, an upper boundary of the symbol region may have characteristics (also referred to as upper boundary characteristics) that an upper pixel and a pixel subsequent to the upper pixel along the row direction do not satisfy the column boundary characteristics, and a reference pixel (e.g., an intersection) below the upper pixel along the column boundary and a pixel subsequent to the reference pixel along the row direction satisfy the column boundary characteristics. The reference pixel and the pixel subsequent to the reference pixel may be on the upper boundary of the symbol region.
The processing device 104 may determine whether the upper intersection and the subsequent pixel of the upper intersection, and the upper pixel and the subsequent pixel of the upper pixel satisfy the column boundary characteristics (i.e., the color of the upper pixel is white and the color of the subsequent pixel of the upper pixel is black, such that the colors of the two pixels change from white to black along the row direction). If the upper pixel and the subsequent pixel of the upper pixel do not satisfy the column boundary characteristics and the upper intersection and a pixel subsequent to the upper intersection along the row direction satisfy the column boundary characteristics (also referred to as the upper pixel and the upper intersection satisfying the upper boundary characteristics), it may indicate that the upper intersection and the subsequent pixel of the upper intersection may be on the upper boundary of the symbol region.
In some embodiments, the RANSAC algorithm may be used to determine the upper boundary of the symbol region based on the upper pixel and the subsequent pixel of the upper pixel corresponding to each of the plurality of column boundaries.
Similarly, for each of the plurality of column boundaries, the processing device 104 may perform a downward traverse and identify a lower intersection among the plurality of intersections. The downward traverse may refer to an operation for traversing intersections one by one downwards along the column boundary. A pixel beneath the lower intersection (also referred to as lower pixel) along the column boundary and a pixel subsequent to the lower pixel along the row direction (also referred to as subsequent pixel of the lower pixel) in a same row may be determined. The lower pixel and the subsequent pixel of the lower pixel may be located at different sides of the column boundary.
As for the PDF 417 barcode, a lower boundary of the symbol region may have characteristics (also referred to as lower boundary characteristics) that a lower pixel and a pixel subsequent to the lower pixel along the row direction do not satisfy the column boundary characteristics, and a reference pixel (e.g., an intersection) above the lower pixel along the column boundary and a pixel subsequent to the reference pixel along the row direction satisfy the column boundary characteristics. The reference pixel and the pixel subsequent to the reference pixel may be on the lower boundary of the symbol region.
The processing device 104 may determine whether the lower intersection and the subsequent pixel of the lower intersection, and the lower pixel and the subsequent pixel of the lower pixel satisfy the column boundary characteristics. If the lower pixel and the subsequent pixel of the lower pixel do not satisfy the column boundary characteristics and the lower intersection and a pixel subsequent to the lower intersection along the row direction satisfy the column boundary characteristics (also referred to as the lower pixel and the lower intersection satisfying the lower boundary characteristics), it may indicate that the lower intersection and the subsequent pixel of the lower intersection may be on the lower boundary of the symbol region.
In some embodiments, the RANSAC algorithm may be used to determine the lower boundary of the symbol region based on the lower pixel and the subsequent pixel of the lower pixel corresponding to each of the plurality of column boundaries.
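The upward traverse along a column boundary described above can be sketched as a simple walk that stops when the column boundary characteristic (white pixel immediately left of the boundary, black pixel on it) no longer holds. Helper names, the grey threshold, and the grid layout are illustrative assumptions:

```python
def column_boundary_holds(grey, y, x, threshold=128):
    """Column boundary characteristic at row y: the pixel left of column
    boundary x is white (a space) and the pixel at x is black (a bar)."""
    return grey[y][x - 1] >= threshold and grey[y][x] < threshold

def trace_upper_boundary_point(grey, x, y_start):
    """Walk upwards along column boundary x from a known intersection; the
    last row where the characteristic still holds lies on the upper boundary
    of the symbol region. The downward traverse is symmetric."""
    y = y_start
    while y - 1 >= 0 and column_boundary_holds(grey, y - 1, x):
        y -= 1
    return y

# A bar occupying rows 2-6 at column 3 of a white image; the column
# boundary is at x = 3, and rows 0-1 are the quiet zone above the symbol.
demo = [[255] * 6 for _ in range(8)]
for y in range(2, 7):
    demo[y][3] = 0
```

Starting from the intersection at row 6, the traverse stops at row 2, the topmost row of the symbol region along this boundary.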
After the upper boundary and the lower boundary of the symbol region are determined, the plurality of row boundaries between the upper boundary and the lower boundary of the symbol region may be determined. As used herein, a row boundary may refer to a boundary between two adjacent rows of symbol characters.
The processing device 104 may determine a plurality of row boundaries among the plurality of symbol characters. Each of the plurality of row boundaries may correspond to two adjacent rows of symbol characters of the plurality of symbol characters. In some embodiments, the plurality of row boundaries may be determined based on the plurality of row lines. In some embodiments, the plurality of row boundaries may be determined by identifying the plurality of row boundaries from the plurality of row lines based on boundary characteristics between adjacent symbol characters.
In some embodiments, widths of bars and spaces between two adjacent symbol characters in the column direction may be different. Accordingly, a row boundary between the two adjacent symbol characters in the column direction may have characteristics (also referred to as row boundary characteristics) that the colors of all the pixels on the row boundary may be the same as the colors of corresponding pixels on a first adjacent row line of the row boundary, and the color of at least one pixel on the row boundary may be different from the color of the corresponding pixel on a second adjacent row line of the row boundary. In some embodiments, the first adjacent row line may be above the row boundary, and the second adjacent row line may be below the row boundary. Alternatively, the first adjacent row line may be below the row boundary, and the second adjacent row line may be above the row boundary. In this case, grey values of all the pixels on the row boundary may be the same as or close to grey values of corresponding pixels on the first adjacent row line of the row boundary, and the grey value of at least one pixel on the row boundary may be different from the grey value of the corresponding pixel on the second adjacent row line of the row boundary.
FIG. 10 is a schematic diagram illustrating an exemplary row boundary between two adjacent symbol characters according to some embodiments of the present disclosure. As shown in FIG. 10, a symbol character P may be represented as 41111144. As for the symbol character P, row lines L1-L4, each connecting pixels in a same row, may be determined. A symbol character Q may be represented as 41111315. As for the symbol character Q, row lines L5-L8, each connecting pixels in a same row, may be determined. As for the row line L4, grey values of all the pixels on the row line L3 may be the same as or close to grey values of corresponding pixels on the row line L4. The grey value of at least one pixel (e.g., pixels in the rectangular boxes as shown in FIG. 10) on the row line L5 may be different from the grey value of the corresponding pixel on the row line L4. The pixels on the row line L4 may satisfy the row boundary characteristics. Thus, the row line L4 may be determined as the row boundary between the symbol character P and the symbol character Q.
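The row boundary characteristics above amount to a two-sided comparison of a candidate row line against its neighbors. A minimal sketch, with the tolerance value and function name as assumptions:

```python
def is_row_boundary(boundary_row, first_adjacent, second_adjacent, tol=10):
    """Row boundary characteristic: every pixel on the candidate row matches
    (within `tol` grey levels) the corresponding pixel on the first adjacent
    row line, while at least one pixel differs from the corresponding pixel
    on the second adjacent row line."""
    same_side = all(abs(a - b) <= tol
                    for a, b in zip(boundary_row, first_adjacent))
    differs = any(abs(a - b) > tol
                  for a, b in zip(boundary_row, second_adjacent))
    return same_side and differs

# Toy row lines: l4 matches l3 exactly but differs from l5 in two pixels,
# so l4 qualifies as a row boundary (cf. L3/L4/L5 in FIG. 10).
l3 = [0, 255, 0, 255]
l4 = [0, 255, 0, 255]
l5 = [0, 0, 255, 255]
```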
In some embodiments, the number or count of pixels in the symbol image may be relatively large (i.e., the symbol image may have a high resolution (e.g., 7680×4320 pixels)). In some embodiments, instead of traversing all the pixels on each row line, the processing device 104 may sample a particular number or count of pixels on each row line from all the pixels on each row line, and determine the plurality of row boundaries among the plurality of symbol characters based on the sampled pixels on each row line. In some embodiments, instead of traversing all the plurality of row lines, the processing device 104 may sample a particular number or count of row lines from the plurality of row lines, and determine the plurality of row boundaries among the plurality of symbol characters based on the sampled row lines.
In 550, the processing device 104 (e.g., the processor 222, the character region determination module 430) may determine a character region corresponding to the symbol character for each of the plurality of symbol characters based on the plurality of column boundaries and the plurality of row boundaries.
Since the start boundary, the end boundary, the upper boundary, and the lower boundary of the symbol region are determined, the plurality of column boundaries and the plurality of row boundaries may divide the symbol region into a plurality of character regions. Each of the plurality of character regions may correspond to one of the plurality of symbol characters.
In 560, the processing device 104 (e.g., the processor 222, the decoding module 440) may decode each symbol character based on grey values associated with the character region corresponding to the symbol character.
As for each character region, the processing device 104 may divide the character region into a plurality of blocks. In some embodiments, the character region may be divided into the plurality of blocks (e.g., 17 blocks) along the row direction. The processing device 104 may determine a global grey value of each of the plurality of blocks. Each of the plurality of blocks may include one or more pixels (e.g., 1×4 pixels (i.e., one pixel in a row and 4 pixels in a column)). Grey values of the one or more pixels of the block may be obtained. The global grey value of the block may be an overall representation of the grey values of the one or more pixels of the block. In some embodiments, the global grey value of the block may be a mean value of the grey values of the one or more pixels of the block.
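The block division and mean-value computation can be sketched as follows. For simplicity this assumes the region width is an exact multiple of the block count; the function name is illustrative:

```python
def global_grey_values(region, n_blocks=17):
    """Split a character region (a list of rows of grey values) into
    `n_blocks` equal blocks along the row direction and return the mean
    grey value of each block."""
    width = len(region[0])
    block_w = width // n_blocks
    means = []
    for b in range(n_blocks):
        vals = [row[x] for row in region
                for x in range(b * block_w, (b + 1) * block_w)]
        means.append(sum(vals) / len(vals))
    return means

# A 4-row, 17-column region whose first five columns are black (a 5-module
# bar) and whose remaining columns are white.
demo_region = [[0] * 5 + [255] * 12 for _ in range(4)]
```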
In some embodiments, the processing device 104 may determine a contrast value of the symbol character corresponding to the character region based on the global grey values of the plurality of blocks. In some embodiments, the processing device 104 may identify blocks of a first type (e.g., blocks belonging to spaces of the symbol character) and blocks of a second type (e.g., blocks belonging to bars of the symbol character) from the plurality of blocks in the character region. The contrast value of the symbol character may be determined based at least on the global grey values of the blocks of the first type and the global grey values of the blocks of the second type.
After the contrast value of the symbol character corresponding to the character region is determined, the processing device 104 may determine a codeword corresponding to the symbol character based on the contrast value. In some embodiments, the processing device 104 may obtain a plurality of preset codewords (e.g., 2787 codewords). Each of the plurality of preset codewords may correspond to a predetermined codeword string. A reference contrast value of each of the plurality of preset codewords may be determined based on the corresponding predetermined codeword string. Then the processing device 104 may determine a similarity value between each of the plurality of preset codewords and the symbol character based on the reference contrast value of each preset codeword and the contrast value of the symbol character. The processing device 104 may determine the codeword corresponding to the symbol character based on the similarity values. The codeword corresponding to the symbol character may be or include decoded data (e.g., numbers, text, vectors, etc.) corresponding to the symbol in the symbol image. Details regarding the decoding of a symbol character can be found elsewhere in the present disclosure, for example, FIG. 14 and the descriptions thereof.
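The selection among preset codewords can be sketched as a nearest-match search. The disclosure does not specify the similarity measure; the sum-of-absolute-differences below, the per-module profile representation, and all names are illustrative assumptions:

```python
def best_codeword(measured, preset_profiles):
    """Return the preset codeword string whose reference profile is most
    similar to the measured profile, using the sum of absolute differences
    as an illustrative similarity measure (smaller = more similar)."""
    def distance(profile):
        return sum(abs(m - r) for m, r in zip(measured, profile))
    return min(preset_profiles, key=lambda cw: distance(preset_profiles[cw]))

# Two toy reference profiles (0 = black module, 1 = white module) and a
# noisy measurement that should match the first preset codeword.
presets = {
    "51111125": [0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1],
    "81111113": [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1],
}
measured = [0, 0, 0, 0, 0, 0.8, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1]
```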
In some embodiments, the decoded data may be transmitted by the transmitter 107 to the terminal 108 through the network 112. The terminal 108 may receive the decoded data corresponding to the symbol in the symbol image through the network 112, and display the decoded data to a user or perform further operations such as payment, identity authentication, registration, etc.
According to some embodiments of the present disclosure, the processing device 104 may determine the plurality of column boundaries among the plurality of symbol characters and the plurality of row boundaries among the plurality of symbol characters based on the row lines, the column boundary characteristics, and the row boundary characteristics. The symbol region may be segmented into a plurality of character regions based on the plurality of column boundaries among the plurality of symbol characters and the plurality of row boundaries among the plurality of symbol characters. Then the processing device 104 may decode the symbol character corresponding to each of the plurality of character regions based on grey values associated with the character region corresponding to the symbol character. In this case, the position of each of the plurality of symbol characters may be determined more accurately, the codeword corresponding to the symbol character may be determined more efficiently and accurately based on the plurality of preset codewords, and errors in the decoding process caused by factors such as image distortion, uneven light, etc., may be reduced or eliminated, thus improving the effectiveness and accuracy of the decoding process.
It should be noted that the above description is merely provided for the purposes of illustration, not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
In some embodiments, the upper boundary and the lower boundary of the symbol region may be determined in various ways. As illustrated in FIG. 11, a symbol image 1100 including a symbol 1110 is provided. A plurality of scan lines L1-Ln may be determined in the symbol image 1100. Each two neighboring scan lines of the plurality of scan lines L1-Ln may have a same distance or different distances. The plurality of scan lines L1-Ln may be parallel to the length direction of the symbol 1110. Each of the plurality of scan lines L1-Ln may traverse pixels in a same row in a symbol region ABCD. The processing device 104 may determine a plurality of column boundaries S1-Sn among a plurality of symbol characters in the symbol region ABCD. In some embodiments, the plurality of column boundaries S1-Sn may be determined based on the plurality of scan lines L1-Ln.
For each of the plurality of column boundaries S1-Sn, the processing device 104 may determine a plurality of intersections of the plurality of scan lines and the column boundary. The processing device 104 may identify an intersection at the uppermost position among the plurality of intersections (also referred to as upper intersection). The processing device 104 may further perform an upward traverse from the intersection at the uppermost position until an upper point is identified. The upward traverse may be an operation for traversing positions above the intersection at the uppermost position one by one upwards along the column boundary. As for the PDF 417 barcode, an upper boundary Lu of the symbol region ABCD may have characteristics (also referred to as upper boundary characteristics) that the upper point and a point subsequent to the upper point along the row direction do not satisfy the column boundary characteristics, and a reference point below the upper point along the column boundary and a point subsequent to the reference point along the row direction satisfy the column boundary characteristics. The reference point and the point subsequent to the reference point may be on the upper boundary Lu of the symbol region ABCD. In some embodiments, the upper point, the reference point, the point subsequent to the upper point, and/or the point subsequent to the reference point may be different from the intersection at the uppermost position. In some embodiments, the upper point, the reference point, the point subsequent to the upper point, and/or the point subsequent to the reference point may be pixels.
According to the upper boundary characteristics, the upper point and a point subsequent to the upper point along the row direction (also referred to as subsequent point of the upper point) may be determined. The upper point and the subsequent point of the upper point may be located at different sides of the column boundary. The processing device 104 may determine whether the upper point and the subsequent point of the upper point, and a reference point below the upper point and the subsequent point of the reference point satisfy the column boundary characteristics. If the upper point and the subsequent point of the upper point do not satisfy the column boundary characteristics, and the reference point below the upper point and the subsequent point of the reference point satisfy the column boundary characteristics (i.e., the upper point and the reference point satisfy the upper boundary characteristics), it may indicate that the reference point and the subsequent point of the reference point may be on the upper boundary Lu of the symbol region ABCD. In some embodiments, the upper boundary Lu of the symbol region ABCD may be determined based on a plurality of reference points corresponding to the plurality of column boundaries S1-Sn. In some embodiments, the RANSAC algorithm may be used to determine the upper boundary Lu of the symbol region ABCD based on the plurality of reference points. Similarly, the lower boundary Ld of the symbol region ABCD may be determined.
FIG. 12 is a flow chart illustrating an exemplary process for determining a plurality of column boundaries among a plurality of symbol characters in a symbol region of a symbol according to some embodiments of the present disclosure. In some embodiments, the process 1200 may be implemented on the image processing system 100 as illustrated in FIG. 1. For example, the process 1200 may be stored in a storage medium (e.g., the network storage device 113, or the storage 227 of the computing device 220) as a form of instructions, and invoked and/or executed by the processing device 104. The operations in the process 1200 presented below are intended to be illustrative. In some embodiments, the process 1200 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 1200 are performed, as illustrated in FIG. 12 and described below, is not intended to be limiting.
In 1210, the processing device 104 may determine a width of a reference symbol character associated with the plurality of symbol characters.
In some embodiments, the reference symbol character may be a start symbol character or an end symbol character. As for the PDF 417 barcode, the plurality of symbol characters in the symbol region may have a same width. A width of the start symbol character may be equal to a width of a symbol character in the symbol region. A width of the end symbol character may be proportional to a width of a symbol character in the symbol region.
In some embodiments, the processing device 104 may obtain a codeword string corresponding to the reference symbol character. The codeword string corresponding to the reference symbol character may also be referred to as predetermined codeword string. In some embodiments, the predetermined codeword string may be a standard codeword string. In some other embodiments, the predetermined codeword string corresponding to the reference symbol character may be determined by a user, according to default settings of the image processing system 100, etc.
In some embodiments, the processing device 104 may determine the arrangement and/or widths of bars and spaces of the reference symbol character based on the predetermined codeword string. In some embodiments, the processing device 104 may further determine grey values of at least a part of pixels corresponding to the bars and spaces of the reference symbol character. Merely by way of example, the at least a part of pixels may be, for example, pixels in at least one row corresponding to the bars and spaces of the reference symbol character. The grey values of the at least a part of pixels may also be referred to as predetermined grey values.
After the predetermined grey values are determined, the processing device 104 may compare the predetermined grey values corresponding to the reference symbol character with grey values of pixels on at least one of the plurality of row lines. Based on the comparison, the processing device 104 may identify at least one line segment on the at least one row line. Grey values of pixels of the at least one line segment may match the predetermined grey values. The processing device 104 may designate a length of the at least one line segment as the width of the reference symbol character.
As for the PDF 417 barcode, a predetermined codeword string corresponding to the start symbol character (also referred to as start codeword string) may be 81111113. Merely for illustration, each of the 17 blocks constituting a symbol character is represented by one pixel in the symbol image. The pixels may be represented by the black and white dots as illustrated in FIG. 13. A start symbol character 1310 may include 8 pixels as a first bar, 1 pixel as a first space, 1 pixel as a second bar, 1 pixel as a second space, 1 pixel as a third bar, 1 pixel as a third space, 1 pixel as a fourth bar, and 3 pixels as a fourth space. Grey values of pixels corresponding to the start symbol character 1310 (i.e., predetermined grey values) may be determined. The processing device 104 may identify at least one line segment 1330 on a row line 1320 based on the predetermined grey values. Grey values of pixels of the at least one line segment 1330 may match the predetermined grey values. The processing device 104 may designate a length of the line segment 1330 as the width of the start symbol character. A predetermined codeword string corresponding to the end symbol character (also referred to as end codeword string) may be 711311121. The width of the end symbol character may be determined similarly.
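The matching of predetermined grey values against a row line can be sketched as a sliding pattern search. As in the FIG. 13 example, one pixel per module is assumed; the function name and threshold are illustrative:

```python
def find_reference_segment(row_line, pattern, threshold=128):
    """Slide the predetermined black/white pattern of the reference symbol
    character along a row line; return (start, length) of the first matching
    segment, i.e. the measured position and width of the reference
    character, or None if no segment matches."""
    binary = [v < threshold for v in row_line]  # True = black (bar) pixel
    for start in range(len(binary) - len(pattern) + 1):
        if binary[start:start + len(pattern)] == pattern:
            return start, len(pattern)
    return None

# Start codeword string 81111113: an 8-module bar, three 1-module bars
# separated by 1-module spaces, and a 3-module trailing space.
start_pattern = [True] * 8 + [False, True, False, True, False, True] + [False] * 3
# A row line: 4 quiet-zone pixels, the start character, then more white.
row = [255] * 4 + [0 if b else 255 for b in start_pattern] + [255] * 10
```

On this row line, the segment is found at pixel 4 with length 17, the width of the start symbol character.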
In some embodiments, the processing device 104 may identify the reference symbol character from the symbol image when the at least one line segment is determined. Each of the line segments may include a start point and an end point. Start points of two or more line segments may form a start edge of the reference symbol character. End points of the two or more line segments may form an end edge of the reference symbol character.
Specifically, the processing device 104 may identify two or more line segments corresponding to the start symbol character on the plurality of row lines. End points of the two or more line segments corresponding to the start symbol character (also referred to as end points of the start symbol character) may form the end edge of the start symbol character. In some embodiments, a random sample consensus (RANSAC) algorithm may be used to determine the end edge of the start symbol character based on the end points of the two or more line segments corresponding to the start symbol character. In some embodiments, the end edge of the start symbol character may be determined by fitting a line according to the RANSAC algorithm based on the end points of the two or more line segments corresponding to the start symbol character. As for the PDF 417 barcode, the start symbol character may be out of the symbol region. The end edge of the start symbol character may coincide with the start boundary of the symbol region. In this case, the start boundary of the symbol region may be determined.
Similarly, the processing device 104 may identify two or more line segments corresponding to the end symbol character on the plurality of row lines. Start points of the two or more line segments corresponding to the end symbol character (also referred to as start points of the end symbol character) may form the start edge of the end symbol character. In some embodiments, the RANSAC algorithm may be used to determine the start edge of the end symbol character based on the start points of the two or more line segments corresponding to the end symbol character. As for the PDF 417 barcode, the end symbol character may be out of the symbol region. The start edge of the end symbol character may coincide with the end boundary of the symbol region. In this case, the end boundary of the symbol region may be determined. The plurality of column boundaries may be determined between the start boundary and the end boundary of the symbol region.
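The boundary fitting described above can be sketched as a minimal RANSAC line fit over the collected end points (or start points). This is a generic illustration rather than the exact fitting procedure of the disclosure; the iteration count, inlier tolerance, and the near-vertical line model x = a·y + b are assumptions.

```python
import random

def ransac_line(points, iterations=200, inlier_tol=1.0, seed=0):
    """Fit a line x = a*y + b through boundary points with RANSAC.
    A barcode boundary is close to vertical, so x is modelled as a
    function of the row coordinate y. `points` is a list of (x, y)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iterations):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if y1 == y2:
            continue                      # degenerate sample
        a = (x2 - x1) / (y2 - y1)
        b = x1 - a * y1
        inliers = [(x, y) for x, y in points if abs(x - (a * y + b)) <= inlier_tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Least-squares refit on the consensus set.
    n = len(best_inliers)
    sx = sum(x for x, _ in best_inliers)
    sy = sum(y for _, y in best_inliers)
    sxy = sum(x * y for x, y in best_inliers)
    syy = sum(y * y for _, y in best_inliers)
    a = (n * sxy - sx * sy) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b
```

Any other robust line-fitting method could be substituted, as the disclosure itself notes.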
In 1220, the processing device 104 may determine a reference width range based on the width of the reference symbol character.
In some embodiments, the reference width range may be determined by increasing the width of the reference symbol character by an increment (e.g., 0.2 millimeters, 0.5 millimeters, 1 millimeter, 1 pixel, 2 pixels, etc. ) and/or decreasing the width of the reference symbol character by a decrement (e.g., 0.2 millimeters, 0.5 millimeters, 1 millimeter, 1 pixel, 2 pixels, etc. ) . The increment and/or the decrement may be defined as an error corresponding to the symbol character. In some embodiments, each of the plurality of symbol characters may correspond to a  same error. In some embodiments, at least one of the plurality of symbol characters may correspond to a different error.
If the reference symbol character is the start symbol character, the processing device 104 may determine the reference width range for each of the plurality of symbol characters in the symbol region based on the width of the start symbol character and an error corresponding to the symbol character. If the reference symbol character is an end symbol character, the processing device 104 may determine the reference width range for each of the plurality of symbol characters in the symbol region based on the width of the end symbol character, a width ratio, and an error corresponding to the symbol character. The width ratio may be a ratio of a width of the symbol character to the width of the end symbol character.
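The determination of the reference width range described above can be sketched as follows, assuming the error and width ratio are supplied per symbol character. The two-pixel default error mirrors the example of FIG. 9 but is otherwise an assumption, as are the function names.

```python
def reference_width_range(reference_width, error=2, width_ratio=1.0):
    """Return the (lower, upper) reference width range for a symbol
    character, given the width of the reference symbol character (the
    start or end symbol character), an optional ratio of the symbol
    character width to the reference width, and an error in pixels."""
    expected = reference_width * width_ratio
    return expected - error, expected + error

def width_within_range(candidate_width, reference_width, error=2, width_ratio=1.0):
    """Check whether a measured character width falls inside the range."""
    low, high = reference_width_range(reference_width, error, width_ratio)
    return low <= candidate_width <= high
```

With the start symbol character as reference, the ratio defaults to 1; with the end symbol character, the per-character width ratio is passed in.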
FIG. 9 is a partial enlarged view of a PDF 417 barcode according to some embodiments of the present disclosure. The partial enlarged view includes two symbol characters M and N. The reference width range may be determined based on the width of the start symbol character and an error. As illustrated in FIG. 9, the error may be represented by an increment of two pixels and a decrement of two pixels shown in the parentheses 910 on each of the plurality of row lines.
In 1230, the processing device 104 may determine the plurality of column boundaries among the plurality of symbol characters based on boundary characteristics between adjacent symbol characters and the reference width range.
Referring to FIG. 9, a column boundary S2 between the symbol character M and the symbol character N is determined based on the column boundary characteristics between adjacent symbol characters and the reference width range. The color of a last block 920 (represented by 4 pixels in a column) of the symbol character M and the color of a first block 930 (represented by 4 pixels in a column) of the symbol character N may change from white to black along the row direction. Accordingly, grey values of pixels of the  blocks  920 and 930 may decrease (e.g., from 255 to 0) along the row direction.
In some embodiments, for two or more of the plurality of row lines, the processing device 104 may identify a plurality of pixels or points (e.g., pixels, midpoints each of which is between two consecutive pixels, etc.) on at least two of the plurality of row lines in the symbol region that satisfy the column boundary characteristics between adjacent symbol characters and the reference width range. The RANSAC algorithm may be used to determine the plurality of column boundaries based on the plurality of points. It should be noted that the RANSAC algorithm is provided for illustration purposes and is not intended to be limiting. Any algorithm or model for curve fitting may be used to determine the column boundaries among the plurality of symbol characters.
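One hedged way to collect candidate boundary points on a single row line is sketched below: walk along the row from a known boundary and keep the white-to-black transitions whose distance from the previous boundary falls inside the reference width range. The threshold of 128 and the exact acceptance rule are assumptions for illustration.

```python
def column_boundary_candidates(row, start_x, width_range, threshold=128):
    """Collect candidate column boundary positions on one row line.

    `row` holds grey values, `start_x` is a known boundary on this row
    line (e.g. the start boundary of the symbol region), and
    `width_range` is the (low, high) reference width range.  A position
    x is kept when the grey value changes from light to dark along the
    row direction and the distance from the previous boundary lies
    inside the range."""
    low, high = width_range
    boundaries = []
    previous = start_x
    for x in range(start_x + 1, len(row)):
        dist = x - previous
        if low <= dist <= high and row[x - 1] >= threshold and row[x] < threshold:
            boundaries.append(x)
            previous = x
    return boundaries
```

Running this on several row lines yields the point sets from which each column boundary is then fitted.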
Merely for illustration purposes, referring to FIG. 8, the plurality of row lines L1-L8 along the symbol direction may be curves due to distortion of the symbol image 800. In some embodiments, a plurality of pixels on the plurality of row lines L1-L8 that satisfy the column boundary characteristics between adjacent symbol characters and the reference width range may be identified. Column boundaries S1-S5 may be formed based on the plurality of identified pixels and the RANSAC algorithm.
FIG. 14 is a flow chart illustrating an exemplary process for decoding a symbol character based on grey values associated with a character region corresponding to the symbol character according to some embodiments of the present disclosure. In some embodiments, the process 1400 may be implemented on the image processing system 100 as illustrated in FIG. 1. For example, the process 1400 may be stored in a storage medium (e.g., the network storage device 113, or the storage 227 of the computing device 200) as a form of instructions, and invoked and/or executed by the processing device 104. The operations in the process 1400 presented below are intended to be illustrative. In some embodiments, the process 1400 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 1400 as illustrated in FIG. 14 and described below is not intended to be limiting. In some embodiments, the operation 560 of the process 500 in FIG. 5 may be performed according to the process 1400.
In 1410, the processing device 104 may divide, along the row direction, the character region corresponding to the symbol character into a plurality of blocks.
In some embodiments, the processing device 104 may divide, along the row direction, the character region corresponding to the symbol character into a preset number or count of blocks equally. As for the PDF 417 barcode, the preset number or count may be 17.
In 1420, the processing device 104 may determine a global gray value of each of the plurality of blocks.
Each of the plurality of blocks may include one or more pixels. Merely by way of example, a block may be represented by 1×4 pixels (i.e., one pixel in a row and 4 pixels in a column), 1×3 pixels (i.e., one pixel in a row and 3 pixels in a column), 2×7 pixels (i.e., 2 pixels in a row and 7 pixels in a column), etc. Grey values of the one or more pixels of the block may be obtained. The global gray value of the block may be an overall representation of the grey values of the one or more pixels of the block. In some embodiments, the global gray value of the block may be a mean value of gray values of the one or more pixels of the block. The mean value may be, for example, an arithmetic mean value, a harmonic mean value, a quadratic mean value, etc. In some embodiments, a grey value of a particular pixel of the block may be determined as the global gray value of the block. In some embodiments, the particular pixel may be specified by a user, according to default settings of the image processing system 100, etc. In some embodiments, the particular pixel may be selected from the one or more pixels randomly.
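Operations 1410 and 1420 can be sketched together as follows, assuming the character region is supplied as rows of grey values between two column boundaries and using an arithmetic mean as the global grey value. The rounding of block edges is an assumption for illustration.

```python
def global_grey_values(region, block_count=17):
    """Divide a character region (a list of rows of grey values between
    two column boundaries) into `block_count` equal blocks along the
    row direction (operation 1410) and return the mean grey value of
    each block as its global grey value (operation 1420).

    Assumes the region is at least `block_count` pixels wide so that
    every block contains at least one pixel column."""
    height = len(region)
    width = len(region[0])
    step = width / block_count
    values = []
    for i in range(block_count):
        # Block i spans pixel columns [x0, x1) of the region.
        x0, x1 = round(i * step), round((i + 1) * step)
        pixels = [region[y][x] for y in range(height) for x in range(x0, x1)]
        values.append(sum(pixels) / len(pixels))
    return values
```

A harmonic or quadratic mean, or a single representative pixel, could be substituted for the arithmetic mean as the disclosure notes.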
In 1430, the processing device 104 may determine a contrast value of the symbol character based on the global grey values of the plurality of blocks.
In some embodiments, the processing device 104 may identify blocks of a first type and blocks of a second type from the plurality of blocks in the character region. In some embodiments, the blocks of the first type may refer to blocks belonging to spaces of the symbol character, and the blocks of the second type may  refer to blocks belonging to bars of the symbol character. In this case, the blocks of the first type may be in the white color, and the blocks of the second type may be in the black color. Alternatively, the blocks of the first type may refer to blocks belonging to bars of the symbol character, and the blocks of the second type may refer to blocks belonging to spaces of the symbol character.
In some embodiments, the processing device 104 may identify blocks of a first type and blocks of a second type from the plurality of blocks in the character region based on the global grey values of the plurality of blocks. In a case that the blocks of the first type include blocks belonging to spaces of the symbol character, and the blocks of the second type include blocks belonging to bars of the symbol character, the blocks of the first type may have greater global grey values (e.g., 255, 240, 230, 220, etc.), and the blocks of the second type may have smaller global grey values (e.g., 0, 10, 20, 30, etc.).
In some embodiments, the contrast value of the symbol character may be determined based at least on the global grey values of the blocks of the first type and the global grey values of the blocks of the second type. For example, the processing device 104 may determine a first ratio of a sum of the global grey values of the blocks of the first type in the character region to a count of the blocks of the first type. Similarly, the processing device 104 may determine a second ratio of a sum of the global grey values of the blocks of the second type in the character region to a count of the blocks of the second type. The contrast value of the symbol character may be determined based on a difference value between the first ratio and the second ratio. Merely by way of example, the contrast value of the symbol character may be determined according to Formula (1):
contrast = spaceGraySum / spaceNum - barGraySum / barNum    (1)
where contrast denotes the contrast value of the symbol character, spaceGraySum denotes a sum of global grey values of blocks in spaces of the symbol character, spaceNum denotes the number or count of the blocks in spaces, barGraySum denotes a sum of global grey values of blocks in bars of the symbol character, and barNum denotes the number or count of the blocks in bars.
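A direct transcription of Formula (1) is sketched below, assuming the bar/space membership of each block is given as a 0/1 mask (an assumed representation introduced for illustration; the disclosure does not prescribe one).

```python
def contrast_value(global_greys, mask):
    """Formula (1): contrast = spaceGraySum/spaceNum - barGraySum/barNum.

    `global_greys` holds the global grey values of the blocks of a
    symbol character, and `mask` marks each block as a bar (1) or a
    space (0).  Assumes the character contains at least one bar block
    and one space block."""
    space = [g for g, m in zip(global_greys, mask) if m == 0]
    bar = [g for g, m in zip(global_greys, mask) if m == 1]
    return sum(space) / len(space) - sum(bar) / len(bar)
```

A cleanly printed character (white spaces, black bars) yields a contrast near 255; a noisy or blurred character yields a smaller value.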
In 1440, the processing device 104 may determine a codeword corresponding to the symbol character based on the contrast value.
In some embodiments, the processing device 104 may obtain a plurality of preset codewords (e.g., 2787 codewords). The plurality of preset codewords may be obtained from, for example, a storage device (e.g., the network storage device 113, or the storage 227 of the computing device 200, etc.) of the image processing system 100 or an external device (e.g., a cloud database). In some embodiments, the plurality of preset codewords may be numbers (e.g., natural numbers). Each of the plurality of preset codewords may correspond to a predetermined codeword string. Correspondence relationships between the plurality of preset codewords and the plurality of predetermined codeword strings may be provided in a matrix, a vector, a data array, a table, etc. Merely for illustration purposes, a plurality of codewords and corresponding codeword strings are provided in Table 1.
Table 1
[Table 1, provided as an image in the original document, lists each preset codeword alongside its corresponding predetermined codeword string.]
In some embodiments, a reference contrast value of each of the plurality of preset codewords may be determined. In some embodiments, the determination of the reference contrast value of each of the plurality of preset codewords may be according to Formula (1), which is similar to or the same as the determination of the contrast value of the symbol character as described in 1430, and will not be repeated.
In some embodiments, a similarity value between each of the plurality of preset codewords and the symbol character may be determined based on the reference contrast value of each preset codeword and the contrast value of the symbol character. The processing device 104 may determine the codeword corresponding to the symbol character based on the similarity values. In some embodiments, the similarity values may be ranked (e.g., in an ascending order or a descending order). The processing device 104 may identify a greatest similarity value from the determined similarity values. A preset codeword corresponding to the identified similarity value may be determined as the codeword corresponding to the symbol character.
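The selection described above can be sketched as follows, with two assumptions flagged here and in the comments: the bar/space mask of a preset codeword is expanded from its codeword string (runs alternating, starting with a bar), and the contrast of Formula (1) is used directly as the similarity value, since the disclosure does not fix a particular similarity measure.

```python
def pattern_from_codeword_string(codeword_string):
    """Expand a codeword string (e.g. "81111113") into a per-block
    bar(1)/space(0) mask; the runs alternate and start with a bar."""
    mask, colour = [], 1
    for digit in codeword_string:
        mask.extend([colour] * int(digit))
        colour = 1 - colour
    return mask

def best_codeword(global_greys, codeword_table):
    """Pick the preset codeword whose mask yields the highest contrast
    (Formula (1)) against the measured global grey values.

    Using the contrast as the similarity value is an assumption; the
    disclosure only states that the codeword with the greatest
    similarity value is selected.  `codeword_table` maps each preset
    codeword to its predetermined codeword string (cf. Table 1)."""
    def contrast(mask):
        space = [g for g, m in zip(global_greys, mask) if m == 0]
        bar = [g for g, m in zip(global_greys, mask) if m == 1]
        return sum(space) / len(space) - sum(bar) / len(bar)
    return max(codeword_table,
               key=lambda cw: contrast(pattern_from_codeword_string(codeword_table[cw])))
```

The codeword whose mask best separates the measured light and dark blocks wins, which matches the ranked-similarity selection described above.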
FIG. 15 is a flow chart illustrating an exemplary process for decoding symbol characters of a symbol in a symbol image according to some embodiments of the present disclosure. In some embodiments, the process 1500 may be implemented on the image processing system 100 as illustrated in FIG. 1. For example, the process 1500 may be stored in a storage medium (e.g., the network storage device 113, or the storage 227 of the computing device 200) as a form of instructions, and invoked and/or executed by the processing device 104. The operations in the process 1500 presented below are intended to be illustrative. In some embodiments, the process 1500 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 1500 as illustrated in FIG. 15 and described below is not intended to be limiting. For a better understanding of the present disclosure, the process 1500 for decoding the symbol characters is described in combination with FIG. 16, which is a schematic diagram of an exemplary PDF 417 barcode.
In 1505, the processing device 104 may obtain a symbol image of a symbol including a plurality of symbol characters in a symbol region. In some embodiments, the symbol image 1600 may be obtained from the image source 101 of the image processing system 100. As shown in FIG. 16, the PDF 417 barcode in the symbol image 1600 may include a start symbol character in a region EADF, a symbol region ABCD, and an end symbol character in a region BGHC. The PDF 417 barcode may include a plurality of symbol characters in the symbol region ABCD.
In 1510, the processing device 104 may determine a plurality of row lines along a length direction of the symbol.
In some embodiments, a plurality of scan lines may be determined in a positioning box. The positioning box may be used to determine a length direction of the PDF 417 barcode (e.g., the horizontal direction as shown in FIG. 16) . Each of the plurality of scan lines may traverse pixels in a same row along the length direction of the PDF 417 barcode. The plurality of scan lines may be determined as the row lines. As illustrated in FIG. 16, a plurality of row lines P1-P19 may be determined. Each of the plurality of row lines P1-P19 may traverse pixels in a same row in the symbol image 1600. Alternatively, each of the plurality of row lines P1-P19 may be determined by connecting pixels in a same row in the symbol image 1600 along the length direction of the PDF 417 barcode.
In 1515, the processing device 104 may determine a start boundary and an end boundary of the symbol region.
In some embodiments, the processing device 104 may determine the arrangement and/or widths of bars and spaces of the start symbol character and/or the end symbol character. The processing device 104 may further determine grey values of at least a part of pixels corresponding to the bars and spaces of the start symbol character and/or the end symbol character (also referred to as predetermined grey values corresponding to the start symbol character and/or the end symbol character) . The predetermined grey values corresponding to the start symbol character and/or the end symbol character may be compared with grey values of pixels on at least one of the plurality of row lines. Based on the comparison, the processing device 104 may identify at least one line segment on at least one of the plurality of row lines corresponding to the start symbol character and/or the end symbol character.
In some embodiments, the processing device 104 may identify two or more line segments corresponding to the start symbol character on the plurality of row lines. Each of the two or more line segments may include a start point and an end point. As illustrated in FIG. 16, start points of the start symbol character of the PDF  417 barcode may be intersections between a line segment EF and the plurality of row lines P1-P19. End points of the start symbol character of the PDF 417 barcode may be intersections between a line segment AD and the plurality of row lines P1-P19.
Start points of the two or more line segments corresponding to the start symbol character may form a start edge of the start symbol character. End points of the two or more line segments corresponding to the start symbol character may form the end edge of the start symbol character. In some embodiments, a random sample consensus (RANSAC) algorithm may be used to determine the end edge of the start symbol character based on the end points of the two or more line segments corresponding to the start symbol character. In some embodiments, the end edge of the start symbol character may be determined by fitting a line according to the RANSAC algorithm based on the end points of the two or more line segments corresponding to the start symbol character. As for the PDF 417 barcode, the start symbol character may be out of the symbol region. The end edge of the start symbol character may coincide with the start boundary of the symbol region. In this case, the start boundary of the symbol region may be determined. Referring to FIG. 16, the start boundary of the symbol region may be A1.
Similarly, the processing device 104 may identify two or more line segments corresponding to the end symbol character on the plurality of row lines. Each of the two or more line segments may include a start point and an end point. As illustrated in FIG. 16, start points of the end symbol character of the PDF 417 barcode may be intersections between a line segment BC and the plurality of row lines P1-P19. End points of the end symbol character of the PDF 417 barcode may be intersections between a line segment GH and the plurality of row lines P1-P19.
Start points of the two or more line segments corresponding to the end symbol character may form a start edge of the end symbol character. End points of the two or more line segments corresponding to the end symbol character may form the end edge of the end symbol character. In some embodiments, a random sample consensus (RANSAC) algorithm may be used to determine the start edge of the end symbol character based on the start points of the two or more line segments corresponding to the end symbol character. In some embodiments, the start edge of the end symbol character may be determined by fitting a line according to the RANSAC algorithm based on the start points of the two or more line segments corresponding to the end symbol character. As for the PDF 417 barcode, the end symbol character may be out of the symbol region. The start edge of the end symbol character may coincide with the end boundary of the symbol region. In this case, the end boundary of the symbol region may be determined. Referring to FIG. 16, the end boundary of the symbol region may be A5.
In 1520, the processing device 104 may determine, based on the plurality of row lines, a plurality of column boundaries among the plurality of symbol characters between the start boundary and the end boundary of the symbol region.
The processing device 104 may obtain a length of at least one line segment corresponding to the start symbol character. The length of the at least one line segment corresponding to the start symbol character may be designated as the width of the start symbol character. For example, referring to FIG. 16, a line segment between an intersection O1 of the row line P1 and the line segment EF and an intersection O2 of the row line P1 and the line segment AD may be a line segment corresponding to the start symbol character of the PDF 417 barcode in the symbol image 1600. A length of the line segment O1O2 may be designated as the width of the start symbol character.
The processing device 104 may determine the plurality of column boundaries among the plurality of symbol characters based on boundary characteristics between adjacent symbol characters and the reference width range. The reference width range may be a range defined by an increment (e.g., two pixels) and a decrement (e.g., two pixels) applied to the width of the start symbol character.
In some embodiments, for two or more of the plurality of row lines, the processing device 104 may identify a plurality of pixels or points (e.g., pixels, midpoints each of which is between two consecutive pixels, etc.) on at least two of the plurality of row lines (e.g., P1-P19) in the symbol region that satisfy the column boundary characteristics between adjacent symbol characters and the reference width range. The RANSAC algorithm may be used to determine the plurality of column boundaries based on the plurality of points. Merely for illustration, the plurality of column boundaries A2, A3, and A4 may be determined between the start boundary A1 and the end boundary A5.
In 1525, the processing device 104 may determine an upper boundary and a lower boundary of the symbol region.
For each of the plurality of column boundaries, the processing device 104 may determine a plurality of intersections (e.g., pixels) of the plurality of row lines and the column boundary. The processing device 104 may perform an upward traverse and identify an upper intersection among the plurality of intersections. A pixel above the upper intersection (also referred to as upper pixel) along the column boundary and a pixel subsequent to the upper pixel along the row direction (also referred to as subsequent pixel of the upper pixel) in a same row may be determined. The upper pixel and the subsequent pixel of the upper pixel may be located at different sides of the column boundary.
The processing device 104 may determine whether the upper intersection and the subsequent pixel of the upper intersection, as well as the upper pixel and the subsequent pixel of the upper pixel, satisfy the column boundary characteristics (i.e., whether the color of the former pixel of each pair is white, the color of the latter pixel is black, and thus the colors of the two pixels change from white to black along the row direction). If the upper pixel and the subsequent pixel of the upper pixel do not satisfy the column boundary characteristics while the upper intersection and the pixel subsequent to the upper intersection along the row direction satisfy the column boundary characteristics (i.e., the upper pixel and the upper intersection satisfy upper boundary characteristics), it may indicate that the upper intersection and the subsequent pixel of the upper intersection are on the upper boundary of the symbol region.
Referring to FIG. 16, a pixel O3, which is an intersection of a column boundary A2 and a row line P1, may be determined as an upper intersection. A pixel subsequent to the upper intersection O3 along the row direction (also referred to as subsequent pixel of the upper intersection) may be O6. Because the upper intersection O3 and the subsequent pixel of the upper intersection O6 satisfy the column boundary characteristics, and an upper pixel O4 of the upper intersection O3 and a subsequent pixel of the upper pixel O5 do not satisfy the column boundary characteristics (i.e., the upper pixel O4 and the upper intersection O3 satisfy upper boundary characteristics), it may indicate that the upper intersection O3 and the subsequent pixel of the upper intersection O6 are on the upper boundary of the symbol region. Similarly, pixels on the upper boundary of the symbol region may be determined according to the column boundaries A3, A4, and A5.
In some embodiments, the RANSAC algorithm may be used to determine the upper boundary of the symbol region based on the upper intersection and the subsequent pixel of the upper intersection corresponding to each of the plurality of column boundaries. The upper boundary may be P1 as illustrated in FIG. 16.
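The upward traverse described above can be sketched as follows for a single column boundary, assuming the boundary is locally vertical at column x; the binarization threshold of 128 and the function name are assumptions. A downward traverse for the lower boundary is symmetric.

```python
def upper_boundary_point(image, x, y_start, threshold=128):
    """Traverse upward along a column boundary at column x, starting
    from the row index y_start of a known intersection.  Return the row
    index of the topmost pixel that still shows the white-to-black
    transition of the column boundary, i.e. a pixel on the upper
    boundary of the symbol region."""
    y = y_start
    # Keep climbing while the pixel pair one row up still changes
    # from white (>= threshold) to black (< threshold) along the row.
    while y - 1 >= 0 and image[y - 1][x - 1] >= threshold and image[y - 1][x] < threshold:
        y -= 1
    return y
```

Repeating this for each column boundary yields the point set through which the upper boundary line is then fitted.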
Similarly, for each of the plurality of column boundaries, the processing device 104 may perform a downward traverse and identify a lower intersection among the plurality of intersections. A pixel beneath the lower intersection (also referred to as lower pixel) along the column boundary and a pixel subsequent to the lower pixel along the row direction (also referred to as subsequent pixel of the lower pixel) in a same row may be determined. The lower pixel and the subsequent pixel of the lower pixel may be located at different sides of the column boundary.
The processing device 104 may determine whether the lower intersection and the subsequent pixel of the lower intersection, and the lower pixel and the subsequent pixel of the lower pixel satisfy the column boundary characteristics. If the lower pixel and the subsequent pixel of the lower pixel do not satisfy the column boundary characteristics and the lower intersection and the pixel subsequent to the lower intersection along the row direction satisfy the column boundary characteristics (i.e., the lower pixel and the lower intersection satisfy lower boundary characteristics) , it may indicate that the lower intersection and the subsequent pixel of the lower intersection may be on the lower boundary of the symbol region.
Referring to FIG. 16, a pixel O7, which is an intersection of the column boundary A2 and a row line P19, may be determined as a lower intersection. A pixel subsequent to the lower intersection O7 along the row direction (also referred to as subsequent pixel of the lower intersection) may be O8. Because the lower intersection O7 and the subsequent pixel of the lower intersection O8 satisfy the column boundary characteristics, and a lower pixel O9 of the lower intersection O7 and a subsequent pixel of the lower pixel O10 do not satisfy the column boundary characteristics (i.e., the lower pixel O9 and the lower intersection O7 satisfy lower boundary characteristics), it may indicate that the lower intersection O7 and the subsequent pixel of the lower intersection O8 are on the lower boundary of the symbol region. Similarly, pixels on the lower boundary of the symbol region may be determined according to the column boundaries A3, A4, and A5.
In some embodiments, the RANSAC algorithm may be used to determine the lower boundary of the symbol region based on the lower intersection and the subsequent pixel of the lower intersection corresponding to each of the plurality of column boundaries. The lower boundary may be P19 as illustrated in FIG. 16.
In 1530, the processing device 104 may determine a plurality of row boundaries among the plurality of symbol characters between the upper boundary and the lower boundary of the symbol region.
In some embodiments, the plurality of row boundaries may be determined by identifying the plurality of row boundaries from the plurality of row lines based on boundary characteristics between adjacent symbol characters. In some embodiments, widths of bars and spaces between two adjacent symbol characters in the column direction may be different. Accordingly, a row boundary between the two adjacent symbol characters in the column direction may have characteristics (also referred to as row boundary characteristics) that the colors of all the pixels on the row boundary may be the same as the colors of corresponding pixels on a first adjacent row line of the row boundary, and the color of at least one pixel on the row boundary may be different from the color of the corresponding pixel on a second adjacent row line of the row boundary. In some embodiments, the first adjacent row line may be above the row boundary, and the second adjacent row line may be below the row boundary. Alternatively, the first adjacent row line may be below the row boundary, and the second adjacent row line may be above the row boundary. In this case, grey values of all the pixels on the row boundary may be the same as or close to grey values of corresponding pixels on the first adjacent row line of the row boundary, and the grey value of at least one pixel on the row boundary may be different from the grey value of the corresponding pixel on the second adjacent row line of the row boundary.
Referring to FIG. 16, row lines P4, P7, P10, P13, and P16 may satisfy the row boundary characteristics. The row lines P4, P7, P10, P13, and P16 may be designated as the row boundaries among the plurality of symbol characters.
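The row boundary characteristics can be checked as sketched below, assuming binarization at a threshold of 128: a row line qualifies when its binarized pixels all agree with one adjacent row line and disagree with the other adjacent row line in at least one pixel. The function name and threshold are assumptions for illustration.

```python
def is_row_boundary(image, y, threshold=128):
    """Check the row boundary characteristics for row line y of a grey
    image: every pixel agrees (dark/light) with one adjacent row line,
    and at least one pixel disagrees with the other adjacent row line.
    Assumes 0 < y < len(image) - 1 so that both neighbours exist."""
    def colours(row):
        # Binarize: True for a dark (bar) pixel, False for a light one.
        return [value < threshold for value in row]
    current = colours(image[y])
    above = colours(image[y - 1])
    below = colours(image[y + 1])
    matches_above = current == above
    matches_below = current == below
    return (matches_above and not matches_below) or (matches_below and not matches_above)
```

Scanning every candidate row line with this test picks out boundaries such as P4, P7, P10, P13, and P16 in FIG. 16.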
In 1535, the processing device 104 may determine a character region corresponding to the symbol character for each of the plurality of symbol characters based on the plurality of column boundaries and the plurality of row boundaries.
Referring to FIG. 16, a plurality of character regions corresponding to the symbol characters may be determined based on the row boundaries P4, P7, P10, P13, and P16 and the column boundaries A2, A3, and A4.
In 1540, the processing device 104 may decode each symbol character based on grey values associated with the character region corresponding to the symbol character. In some embodiments, the operation for decoding each symbol character may be similar to or the same as the operations 1410 through 1440 of the process 1400 as illustrated in FIG. 14, and will not be repeated here.
Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications may occur and are intended to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested by this disclosure, and are within the spirit and scope of the exemplary embodiments of this disclosure.
Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms “one embodiment, ” “an embodiment, ” and “some embodiments” mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the present disclosure.
Further, it will be appreciated by one skilled in the art that aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in an implementation combining software and hardware, any of which may generally be referred to herein as a “module,” “unit,” “component,” “device,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electromagnetic, optical, or the like, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, or the like; conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, or ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider), in a cloud computing environment, or offered as a service such as a Software as a Service (SaaS).
Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefor, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. Although the above disclosure discusses, through various examples, what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose, and that the appended claims are not limited to the disclosed embodiments but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software-only solution, e.g., an installation on an existing server or mobile device.
Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, claimed subject matter may lie in less than all features of a single foregoing disclosed embodiment.

Claims (25)

  1. A system, comprising:
    at least one storage device storing a set of instructions; and
    at least one processor configured to communicate with the at least one storage device, wherein when executing the set of instructions, the at least one processor is directed to perform operations including:
    obtaining a symbol image of a symbol including a plurality of symbol characters in a symbol region;
    determining a plurality of row lines along a length direction of the symbol;
    determining, based on the plurality of row lines, a plurality of column boundaries among the plurality of symbol characters, each of the plurality of column boundaries corresponding to two consecutive columns of two symbol characters of the plurality of symbol characters;
    determining, based on the plurality of row lines, a plurality of row boundaries among the plurality of symbol characters, each of the plurality of row boundaries corresponding to two adjacent rows of the plurality of symbol characters; and
    for each of the plurality of symbol characters,
    determining a character region corresponding to the symbol character based on the plurality of column boundaries and the plurality of row boundaries; and
    decoding the symbol character based on grey values associated with the character region corresponding to the symbol character.
  2. The system of claim 1, wherein determining, based on the plurality of row lines, the plurality of column boundaries among the plurality of symbol characters includes:
    determining a width of a reference symbol character associated with the plurality of symbol characters;
    determining a reference width range based on the width of the reference symbol character; and
    determining the plurality of column boundaries among the plurality of symbol  characters based on boundary characteristics between adjacent symbol characters and the reference width range.
  3. The system of claim 2, wherein the reference symbol character includes a start symbol character or an end symbol character.
  4. The system of claim 2, wherein determining the width of the reference symbol character includes:
    obtaining a preset codeword string associated with the reference symbol character;
    for at least one of the plurality of row lines, identifying at least one reference line segment based on grey values of pixels on the at least one of the plurality of row lines and predetermined grey values associated with the preset codeword string; and
    designating a length of the at least one reference line segment as the width of the reference symbol character.
  5. The system of any one of claims 1-4, wherein determining, based on the plurality of row lines, the plurality of row boundaries among the plurality of symbol characters includes:
    identifying the plurality of row boundaries among the plurality of symbol characters from the plurality of row lines based on boundary characteristics between adjacent symbol characters.
  6. The system of any one of claims 1-5, wherein for each of the plurality of symbol characters, decoding the symbol character based on grey values associated with the character region corresponding to the symbol character includes:
    dividing, along a row direction, the character region corresponding to the symbol character into a plurality of blocks;
    determining a global grey value of each of the plurality of blocks;
    determining a contrast value of the symbol character based on the global grey values of the plurality of blocks; and
    determining a codeword corresponding to the symbol character based on the contrast value.
  7. The system of claim 6, wherein determining the contrast value of the symbol character based on the global grey values of the plurality of blocks includes:
    determining a first ratio of grey values of blocks of a first type in the character region to a count of the blocks of the first type;
    determining a second ratio of grey values of blocks of a second type in the character region to a count of the blocks of the second type; and
    determining the contrast value of the symbol character based on a difference value between the first ratio and the second ratio.
  8. The system of any one of claims 1-7, the operations further including:
    determining a start boundary, an end boundary, an upper boundary, and a lower boundary of the symbol region.
  9. The system of claim 8, wherein determining the start boundary of the symbol region includes:
    for at least one of the plurality of row lines, identifying at least one end point of a start symbol character based on grey values of pixels on the at least one of the plurality of row lines and predetermined grey values associated with a start codeword string; and
    determining the start boundary of the symbol region based on the at least one end point of the start symbol character.
  10. The system of claim 8 or claim 9, wherein determining the end boundary of the symbol region includes:
    for at least one of the plurality of row lines, identifying at least one start point of an end symbol character based on grey values of pixels on the at least one of the  plurality of row lines and predetermined grey values associated with an end codeword string; and
    determining the end boundary of the symbol region based on the at least one start point of the end symbol character.
  11. The system of any one of claims 8-10, wherein determining the upper boundary of the symbol region includes:
    for each of the plurality of column boundaries,
    determining a plurality of intersections of the plurality of row lines and the column boundary;
    performing an upward traverse until an upper pixel of a first intersection is identified, wherein the upper pixel and the first intersection satisfy upper boundary characteristics; and
    determining the upper boundary of the symbol region based on the first intersection of each of the plurality of column boundaries.
  12. The system of any one of claims 8-11, wherein determining the lower boundary of the symbol region includes:
    for each of the plurality of column boundaries,
    determining a plurality of intersections of the plurality of row lines and the column boundary;
    performing a downward traverse until a lower pixel of a second intersection is identified, wherein the lower pixel and the second intersection satisfy lower boundary characteristics; and
    determining the lower boundary of the symbol region based on the second intersection of each of the plurality of column boundaries.
  13. A method implemented on a computing device having a processor and a computer-readable storage device, the method comprising:
    obtaining a symbol image of a symbol including a plurality of symbol characters  in a symbol region;
    determining a plurality of row lines along a length direction of the symbol;
    determining, based on the plurality of row lines, a plurality of column boundaries among the plurality of symbol characters, each of the plurality of column boundaries corresponding to two consecutive columns of two symbol characters of the plurality of symbol characters;
    determining, based on the plurality of row lines, a plurality of row boundaries among the plurality of symbol characters, each of the plurality of row boundaries corresponding to two adjacent rows of the plurality of symbol characters; and
    for each of the plurality of symbol characters,
    determining a character region corresponding to the symbol character based on the plurality of column boundaries and the plurality of row boundaries; and
    decoding the symbol character based on grey values associated with the character region corresponding to the symbol character.
  14. The method of claim 13, wherein determining, based on the plurality of row lines, the plurality of column boundaries among the plurality of symbol characters includes:
    determining a width of a reference symbol character associated with the plurality of symbol characters;
    determining a reference width range based on the width of the reference symbol character; and
    determining the plurality of column boundaries among the plurality of symbol characters based on boundary characteristics between adjacent symbol characters and the reference width range.
  15. The method of claim 14, wherein the reference symbol character includes a start symbol character or an end symbol character.
  16. The method of claim 14, wherein determining the width of the reference symbol character includes:
    obtaining a preset codeword string associated with the reference symbol character;
    for at least one of the plurality of row lines, identifying at least one reference line segment based on grey values of pixels on the at least one of the plurality of row lines and predetermined grey values associated with the preset codeword string; and
    designating a length of the at least one reference line segment as the width of the reference symbol character.
  17. The method of any one of claims 13-16, wherein determining, based on the plurality of row lines, the plurality of row boundaries among the plurality of symbol characters includes:
    identifying the plurality of row boundaries among the plurality of symbol characters from the plurality of row lines based on boundary characteristics between adjacent symbol characters.
  18. The method of any one of claims 13-17, wherein for each of the plurality of symbol characters, decoding the symbol character based on grey values associated with the character region corresponding to the symbol character includes:
    dividing, along a row direction, the character region corresponding to the symbol character into a plurality of blocks;
    determining a global grey value of each of the plurality of blocks;
    determining a contrast value of the symbol character based on the global grey values of the plurality of blocks; and
    determining a codeword corresponding to the symbol character based on the contrast value.
  19. The method of claim 18, wherein determining the contrast value of the symbol character based on the global grey values of the plurality of blocks includes:
    determining a first ratio of grey values of blocks of a first type in the character region to a count of the blocks of the first type;
    determining a second ratio of grey values of blocks of a second type in the character region to a count of the blocks of the second type; and
    determining the contrast value of the symbol character based on a difference value between the first ratio and the second ratio.
  20. The method of any one of claims 13-19, further comprising:
    determining a start boundary, an end boundary, an upper boundary, and a lower boundary of the symbol region.
  21. The method of claim 20, wherein determining the start boundary of the symbol region includes:
    for at least one of the plurality of row lines, identifying at least one end point of a start symbol character based on grey values of pixels on the at least one of the plurality of row lines and predetermined grey values associated with a start codeword string; and
    determining the start boundary of the symbol region based on the at least one end point of the start symbol character.
  22. The method of claim 20 or claim 21, wherein determining the end boundary of the symbol region includes:
    for at least one of the plurality of row lines, identifying at least one start point of an end symbol character based on grey values of pixels on the at least one of the plurality of row lines and predetermined grey values associated with an end codeword string; and
    determining the end boundary of the symbol region based on the at least one start point of the end symbol character.
  23. The method of any one of claims 20-22, wherein determining the upper boundary of the symbol region includes:
    for each of the plurality of column boundaries,
    determining a plurality of intersections of the plurality of row lines and the column boundary;
    performing an upward traverse until an upper pixel of a first intersection is identified, wherein the upper pixel and the first intersection satisfy upper boundary characteristics; and
    determining the upper boundary of the symbol region based on the first intersection of each of the plurality of column boundaries.
  24. The method of any one of claims 20-23, wherein determining the lower boundary of the symbol region includes:
    for each of the plurality of column boundaries,
    determining a plurality of intersections of the plurality of row lines and the column boundary;
    performing a downward traverse until a lower pixel of a second intersection is identified, wherein the lower pixel and the second intersection satisfy lower boundary characteristics; and
    determining the lower boundary of the symbol region based on the second intersection of each of the plurality of column boundaries.
  25. A non-transitory readable medium, comprising at least one set of instructions, wherein when executed by at least one processor of a computing device, the at least one set of instructions directs the at least one processor to perform a method, the method comprising:
    obtaining a symbol image of a symbol including a plurality of symbol characters in a symbol region;
    determining a plurality of row lines along a length direction of the symbol;
    determining, based on the plurality of row lines, a plurality of column boundaries among the plurality of symbol characters, each of the plurality of column boundaries corresponding to two consecutive columns of two symbol characters of the plurality of symbol characters;
    determining, based on the plurality of row lines, a plurality of row boundaries among the plurality of symbol characters, each of the plurality of row boundaries corresponding to two adjacent rows of the plurality of symbol characters; and
    for each of the plurality of symbol characters,
    determining a character region corresponding to the symbol character based on the plurality of column boundaries and the plurality of row boundaries; and
    decoding the symbol character based on grey values associated with the character region corresponding to the symbol character.
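The column-boundary determination recited in claims 2 and 14 can be sketched as follows: a reference width is measured from a reference symbol character (e.g. the start or end symbol character), widened into a reference width range, and candidate boundaries are accepted only when consecutive accepted boundaries are about one reference width apart. This is a minimal Python sketch under assumed inputs: `edge_positions` (candidate boundary x-coordinates already found on the row lines via the boundary characteristics), the 25% tolerance, and the greedy left-to-right grouping are all illustrative assumptions, not details from the claims.

```python
def column_boundaries(edge_positions, reference_width, tolerance=0.25):
    """Select column boundaries whose spacing falls in the reference width range.

    `edge_positions` must be sorted ascending and start at the symbol's
    start boundary; `reference_width` is the measured width of the
    reference symbol character.
    """
    # Reference width range derived from the reference symbol character.
    lo = reference_width * (1.0 - tolerance)
    hi = reference_width * (1.0 + tolerance)
    boundaries = [edge_positions[0]]
    for x in edge_positions[1:]:
        # Accept a candidate only if its distance from the previously
        # accepted boundary lies inside the reference width range.
        if lo <= x - boundaries[-1] <= hi:
            boundaries.append(x)
    return boundaries
```

For example, with a reference symbol character 17 pixels wide, spurious edges between true character boundaries are rejected because they fall outside the 12.75 to 21.25 pixel range.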
PCT/CN2021/091910 2020-05-07 2021-05-06 Systems and methods for barcode decoding WO2021223709A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022566603A JP7481494B2 (en) 2020-05-07 2021-05-06 Systems and methods for barcode decoding
KR1020227040154A KR20230002813A (en) 2020-05-07 2021-05-06 Systems and methods for decoding barcodes

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010378011.9A CN111476054B (en) 2020-05-07 2020-05-07 Decoding method and electronic equipment
CN202010378011.9 2020-05-07

Publications (1)

Publication Number Publication Date
WO2021223709A1 true WO2021223709A1 (en) 2021-11-11

Family

ID=71757288

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/091910 WO2021223709A1 (en) 2020-05-07 2021-05-06 Systems and methods for barcode decoding

Country Status (4)

Country Link
JP (1) JP7481494B2 (en)
KR (1) KR20230002813A (en)
CN (1) CN111476054B (en)
WO (1) WO2021223709A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476054B (en) * 2020-05-07 2022-03-08 浙江华睿科技股份有限公司 Decoding method and electronic equipment

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102184378A (en) * 2011-04-27 2011-09-14 茂名职业技术学院 Method for cutting portable data file (PDF) 417 standard two-dimensional bar code image
CN102521559A (en) * 2011-12-01 2012-06-27 四川大学 417 bar code identification method based on sub-pixel edge detection
US8313029B2 (en) * 2008-01-31 2012-11-20 Seiko Epson Corporation Apparatus and methods for decoding images
US20140291402A1 (en) * 2013-03-28 2014-10-02 Nidec Sankyo Corporation Stack barcode reader and stack barcode reading method
CN111476054A (en) * 2020-05-07 2020-07-31 浙江华睿科技有限公司 Decoding method and electronic equipment

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
JP2005174128A (en) 2003-12-12 2005-06-30 Tohken Co Ltd Code-reading device
US7584402B2 (en) 2004-04-02 2009-09-01 Silverbrook Research Pty Ltd Data storage format for encoding a bit stream on or in a surface
JP5246146B2 (en) 2009-12-01 2013-07-24 コニカミノルタビジネステクノロジーズ株式会社 Image forming apparatus and image reading apparatus
CN101908122B (en) * 2010-06-01 2012-08-22 福建新大陆电脑股份有限公司 Bar space margin processing module, bar code identifying device and method thereof
CN101833640B (en) * 2010-06-01 2015-12-16 福建新大陆电脑股份有限公司 The empty boundary pixel point computing module of bar and computing method thereof
CN101908126B (en) * 2010-06-01 2015-10-07 福建新大陆电脑股份有限公司 PDF417 bar code decoding chip
CN103034831B (en) * 2011-09-30 2015-05-27 无锡爱丁阁信息科技有限公司 Method and system for identifying linear bar code
CN106446750B (en) * 2016-07-07 2018-09-14 深圳市华汉伟业科技有限公司 A kind of bar code read method and device
CN109388999B (en) * 2017-08-11 2021-09-17 杭州海康威视数字技术股份有限公司 Bar code identification method and device


Also Published As

Publication number Publication date
KR20230002813A (en) 2023-01-05
CN111476054B (en) 2022-03-08
CN111476054A (en) 2020-07-31
JP2023525500A (en) 2023-06-16
JP7481494B2 (en) 2024-05-10


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document: 21800708; Country: EP; Kind code: A1)
ENP Entry into the national phase (Ref document: 2022566603; Country: JP; Kind code: A)
ENP Entry into the national phase (Ref document: 20227040154; Country: KR; Kind code: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document: 21800708; Country: EP; Kind code: A1)