CN113051950B - Multi-bar code identification method and related equipment - Google Patents

Multi-bar code identification method and related equipment

Info

Publication number
CN113051950B
Authority
CN
China
Prior art keywords
decoding
target
image
bar codes
pixel value
Prior art date
Legal status
Active
Application number
CN201911381243.3A
Other languages
Chinese (zh)
Other versions
CN113051950A (en)
Inventor
韩廷睿
王乐乐
吴花精灵
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201911381243.3A
Publication of CN113051950A
Application granted
Publication of CN113051950B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10: Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14: Methods or arrangements for sensing record carriers using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404: Methods for optical code recognition
    • G06K 7/1408: Methods for optical code recognition, the method being specifically adapted for the type of code
    • G06K 7/1413: 1D bar codes
    • G06K 7/1417: 2D bar codes

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a multi-bar code identification method, which comprises the following steps: acquiring a target image, wherein the target image comprises a plurality of bar codes, the bar codes do not overlap one another, and the bar codes comprise one-dimensional codes and two-dimensional codes; acquiring gradient information of the target image; determining M first regions and N second regions from the target image according to the gradient information, wherein each first region comprises one of the one-dimensional codes in the plurality of bar codes, each second region comprises one of the two-dimensional codes in the plurality of bar codes, and M and N are positive integers; and decoding the M first regions and the N second regions. In this way, a terminal device can identify multiple types of bar codes included in the same image.

Description

Multi-bar code identification method and related equipment
Technical Field
The present disclosure relates to the field of image recognition, and in particular, to a multi-barcode recognition method and related devices.
Background
Bar codes, which include one-dimensional codes and two-dimensional codes, are a technology that integrates coding, printing, recognition, and data acquisition and processing. Scanning bar codes is a major trend in the future circulation of commodities. Bar code technology can quickly identify information such as the variety, manufacturer, and production batch of a commodity, and supports functions such as anti-counterfeiting, anti-diversion, logistics management, and tracking and tracing, allowing commodities to circulate quickly, freely, and widely.
However, existing bar code recognition supports the recognition of only a single type of bar code; when an image to be recognized includes multiple types of bar codes, the bar codes cannot be accurately recognized.
Disclosure of Invention
The embodiments of the present application provide a multi-bar code identification method that identifies the type of each bar code and the region where each bar code is located based on gradient information, and decodes each such region, so that a terminal device can identify multiple types of bar codes included in the same image.
In a first aspect, an embodiment of the present application provides a multi-barcode identification method, where the method includes:
acquiring a target image, wherein the target image comprises a plurality of bar codes, the bar codes do not overlap one another, and the bar codes comprise one-dimensional codes and two-dimensional codes;
acquiring gradient information of the target image;
determining M first regions and N second regions from the target image according to the gradient information, wherein each first region corresponds to one of the one-dimensional codes in the plurality of bar codes, each second region corresponds to one of the two-dimensional codes in the plurality of bar codes, and M and N are positive integers;
and decoding the M first regions and the N second regions.
In the embodiment of the application, a target image is acquired, wherein the target image comprises a plurality of bar codes, the bar codes do not overlap one another, and the bar codes comprise one-dimensional codes and two-dimensional codes; gradient information of the target image is acquired; M first regions and N second regions are determined from the target image according to the gradient information, wherein each first region corresponds to one of the one-dimensional codes, each second region corresponds to one of the two-dimensional codes, and M and N are positive integers; and the M first regions and the N second regions are decoded. By this method, the terminal device identifies the bar code type of each bar code and the region where each bar code is located, and decodes each region, so that the terminal device can identify multiple types of bar codes included in the same image.
Optionally, in an implementation of the first aspect, the decoding the M first regions and the N second regions includes:
decoding the M first areas based on a decoding rule of a one-dimensional code;
and decoding the N second areas based on a decoding rule of the two-dimensional code.
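For illustration, the following is a minimal sketch of this two-branch decoding in Python, assuming the open-source pyzbar library as the underlying decoder and simple (x, y, w, h) region boxes; the patent does not name a specific decoding library, and the symbol lists shown are placeholders.

```python
# Sketch only: pyzbar stands in for the 1D/2D decoding rules; the patent
# does not prescribe a particular decoder.
from pyzbar.pyzbar import decode, ZBarSymbol

ONE_D_SYMBOLS = [ZBarSymbol.EAN13, ZBarSymbol.EAN8,
                 ZBarSymbol.CODE128, ZBarSymbol.CODE39]

def decode_regions(image, first_regions, second_regions):
    """Decode M first (1D) regions and N second (2D) regions of a
    grayscale image; each region is an (x, y, w, h) box."""
    contents = []
    for x, y, w, h in first_regions:
        # Decode the cropped region with one-dimensional decoding rules only.
        for r in decode(image[y:y + h, x:x + w], symbols=ONE_D_SYMBOLS):
            contents.append(r.data.decode("utf-8"))
    for x, y, w, h in second_regions:
        # Decode the cropped region with two-dimensional decoding rules only.
        for r in decode(image[y:y + h, x:x + w], symbols=[ZBarSymbol.QRCODE]):
            contents.append(r.data.decode("utf-8"))
    return contents
```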
Optionally, in an implementation of the first aspect, the method further includes:
obtaining M+N decoding contents after the decoding, wherein the M+N decoding contents comprise M decoding contents corresponding to the M first areas and N decoding contents corresponding to the N second areas, each decoding content comprises a character string, and each decoding content is used for triggering a corresponding function;
wherein the functions include at least one of:
jumping to a corresponding webpage;
opening a target function in a corresponding application program;
displaying the corresponding character string;
displaying the corresponding video;
or, playing the corresponding audio.
Optionally, in an implementation of the first aspect, the method further includes:
generating corresponding prompt information based on each of L decoding contents, wherein the L decoding contents belong to the M+N decoding contents, and the prompt information includes:
a prompt of the function corresponding to the current decoding content, a prompt that the function corresponding to the current decoding content cannot be triggered, or recommendation information of an application program that can trigger the function corresponding to the current decoding content;
and displaying the L prompt messages, wherein L is a positive integer less than or equal to M+N.
Optionally, in an implementation of the first aspect, the method further includes:
receiving a selection instruction of a user, wherein the selection instruction indicates the selection of target prompt information, and the target prompt information is one of the L prompt messages;
and in response to the selection instruction, triggering the function of the decoding content corresponding to the target prompt information.
Optionally, in an implementation of the first aspect, the L decoding contents do not include decoding contents in which the corresponding barcode is a one-dimensional code.
Optionally, in an implementation of the first aspect, the method further includes:
obtaining the definition of each bar code in the M+N bar codes by performing definition recognition on the bar codes in the M first areas and the N second areas;
and determining the L decoding contents from the M+N decoding contents according to the definition of each bar code in the M+N bar codes, wherein the bar code corresponding to each of the L decoding contents is one of the first L bar codes when the M+N bar codes are sorted in descending order of definition.
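The patent does not specify how definition (sharpness) is measured; the variance of the Laplacian is one common proxy. A sketch under that assumption, where the hypothetical `regions_with_content` pairs each region box with its decoding content:

```python
import cv2

def top_l_by_sharpness(gray, regions_with_content, l):
    """Keep the L decoding contents whose bar codes rank highest in
    sharpness; Laplacian variance is an assumed stand-in for the
    patent's unspecified 'definition' measure."""
    def sharpness(box):
        x, y, w, h = box
        return cv2.Laplacian(gray[y:y + h, x:x + w], cv2.CV_64F).var()

    ranked = sorted(regions_with_content, key=lambda rc: sharpness(rc[0]),
                    reverse=True)  # descending order of definition
    return [content for _box, content in ranked[:l]]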
Optionally, in an implementation of the first aspect, the method further includes:
obtaining the size of each bar code in the M+N bar codes by performing size recognition on the bar codes in the M first areas and the N second areas;
and determining the L decoding contents from the M+N decoding contents according to the size of each bar code in the M+N bar codes, wherein the bar codes corresponding to the L decoding contents are the first L bar codes when the M+N bar codes are sorted in descending order of size.
Optionally, in an implementation of the first aspect, the target image is acquired by a barcode recognition function of a target application, and the method further includes:
acquiring the use frequency of the function corresponding to each decoding content in the plurality of decoding contents in the target application program;
and determining the L decoding contents from the M+N decoding contents according to the use frequency of the function corresponding to each decoding content, wherein the L decoding contents are the first L decoding contents when the M+N decoding contents are sorted in descending order of the use frequency of their corresponding functions.
Optionally, in an implementation manner of the first aspect, a distance between a display position of each prompt message and a corresponding first target area is within a preset range, where the first target area is one of the M first areas and the N second areas.
Optionally, in an implementation of the first aspect, the method further includes:
and displaying an association identifier, wherein the association identifier is used for indicating the association relation between each prompt message and a corresponding first target area, and the first target area is one of the M first areas and the N second areas.
Optionally, in an implementation of the first aspect, the method further includes:
if a selection instruction of the user for the L prompt messages is not received within a preset time, determining first decoding content from the M+N decoding contents, wherein the use frequency of the function corresponding to the first decoding content is the highest among the M+N decoding contents, the definition of the bar code corresponding to the first decoding content is the highest among the M+N bar codes, or the size of the bar code corresponding to the first decoding content is the largest among the M+N bar codes; the target image is acquired through a target application program, and the use frequency is the frequency with which the target application program uses the function corresponding to the decoding content;
triggering the function corresponding to the first decoding content.
Optionally, in an implementation of the first aspect, before the decoding of the M first regions and the N second regions, the method further includes:
if the minimum included angle between a boundary line of a second target area and the transverse axis direction of the target image is larger than a first preset angle, rotating the second target area so that this included angle becomes smaller than the first preset angle, wherein the second target area is one of the M first areas and the N second areas.
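A minimal sketch of this rotation step, assuming the region is available as a crop and its measured boundary angle is supplied by the caller; the 5-degree threshold is an illustrative placeholder for the first preset angle, and the sign convention depends on how the angle is measured:

```python
import cv2

def deskew_region(crop, boundary_angle_deg, first_preset_angle=5.0):
    """Rotate a cropped region back toward the image's transverse (x) axis
    when its boundary line deviates by more than the preset angle."""
    if abs(boundary_angle_deg) <= first_preset_angle:
        return crop  # already within the first preset angle
    h, w = crop.shape[:2]
    # Positive angles rotate counter-clockwise in OpenCV's convention.
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), boundary_angle_deg, 1.0)
    return cv2.warpAffine(crop, rot, (w, h), borderValue=255)
```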
Optionally, in an implementation of the first aspect, the gradient information includes a pixel value gradient magnitude value and a pixel value gradient direction, the gradient information indicates pixel value variation information of each pixel point of the target image, the pixel value gradient direction indicates a pixel value maximum variation direction of each pixel point, and the pixel value gradient magnitude value indicates a pixel value variation magnitude of the pixel value maximum variation direction of each pixel point.
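As a concrete reading of this definition, the sketch below computes per-pixel gradient magnitude and direction with Sobel derivatives; the patent does not mandate a particular gradient operator:

```python
import cv2

def pixel_gradients(gray):
    """Return per-pixel gradient magnitude (how strongly the pixel value
    changes along its direction of maximum change) and gradient direction
    (that direction, in degrees)."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal derivative
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical derivative
    magnitude = cv2.magnitude(gx, gy)
    direction = cv2.phase(gx, gy, angleInDegrees=True)  # 0..360 degrees
    return magnitude, direction
```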
Optionally, in an implementation of the first aspect, the determining M first regions and N second regions from the target image according to gradient information includes:
performing image division on the target image to obtain a plurality of sub-images;
determining a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image includes M pixel points, the difference between the pixel value gradient direction of N pixel points in the M pixel points and a first angle is within a preset angle range, the ratio of the sum of the pixel value gradient amplitude values of the N pixel points to the sum of the pixel value gradient amplitude values of the M pixel points is larger than a first preset value, and any one of the plurality of target sub-images is adjacent to at least one other target sub-image;
and determining the circumscribed rectangular area of the plurality of target sub-images as a first area.
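A sketch of this selection for one candidate angle, using the magnitude/direction arrays from the previous sketch; the cell size, angle tolerance, and ratio threshold (the "first preset value") are illustrative placeholders. Adjacent marked cells would then be grouped into connected clusters, and the circumscribed rectangle of each cluster taken as a first region:

```python
import numpy as np

def one_d_candidate_cells(magnitude, direction, cell=16, first_angle=0.0,
                          angle_tol=15.0, ratio_thresh=0.6):
    """Mark grid cells dominated by a single gradient direction, the
    signature of a 1D bar code's parallel bars. Thresholds are
    placeholders, not values from the patent."""
    rows, cols = magnitude.shape[0] // cell, magnitude.shape[1] // cell
    mask = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            mag = magnitude[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            ang = direction[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            # Fold directions into [0, 180) so opposite gradients agree.
            diff = np.abs((ang % 180.0) - first_angle)
            near = np.minimum(diff, 180.0 - diff) < angle_tol
            total = mag.sum()
            # Keep the cell if gradients near the first angle carry most
            # of the cell's total gradient energy.
            if total > 0 and mag[near].sum() / total > ratio_thresh:
                mask[i, j] = True
    return mask
```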
Optionally, in an implementation of the first aspect, the determining M first regions and N second regions from the target image according to gradient information includes:
performing image division on the target image to obtain a plurality of sub-images;
identifying a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image includes M pixel points, the difference between the pixel value gradient direction of O pixel points in the M pixel points and a second angle is within a preset angle range, the difference between the pixel value gradient direction of P pixel points in the M pixel points and a third angle is within the preset angle range, the sum of the pixel value gradient amplitude values of the O pixel points plus the sum of the pixel value gradient amplitude values of the P pixel points is larger than a fourth preset value, the difference between the second angle and the third angle is within a preset range, and any one of the plurality of target sub-images is adjacent to at least one other target sub-image;
and determining the circumscribed rectangular area of the plurality of target sub-images as a second area.
Optionally, in an implementation of the first aspect, a pixel value gradient magnitude value of each of the M pixel points is greater than a second preset value.
Optionally, in an implementation of the first aspect, a sum of pixel value gradient magnitude values of the M pixel points is greater than a third preset value.
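Correspondingly, a second region's cells show strong gradients along two angles roughly 90 degrees apart (the second and third angles). A per-cell test under the same grid scheme; all numeric thresholds, including the per-pixel floor standing in for the second preset value, are illustrative placeholders:

```python
import numpy as np

def two_d_candidate_cell(mag_cell, ang_cell, second_angle=0.0,
                         third_angle=90.0, angle_tol=15.0,
                         sum_thresh=1000.0, mag_floor=8.0):
    """Test one grid cell for the 2D-code signature: strong gradient
    energy along two roughly perpendicular angles."""
    folded = ang_cell % 180.0

    def near(angle):
        diff = np.abs(folded - angle)
        return np.minimum(diff, 180.0 - diff) < angle_tol

    strong = mag_cell > mag_floor        # per-pixel magnitude floor
    o_pts = near(second_angle) & strong  # pixels aligned with the second angle
    p_pts = near(third_angle) & strong   # pixels aligned with the third angle
    # The combined gradient energy along both angles must exceed the
    # fourth preset value for the cell to count toward a second region.
    return mag_cell[o_pts].sum() + mag_cell[p_pts].sum() > sum_thresh
```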
Optionally, in an implementation of the first aspect, the method further includes:
and performing a morphological operation on the image region where the first bar code is located, extracting the largest connected region obtained after the morphological operation based on a region-connectivity algorithm, and determining the boundary of the largest connected region as the boundary line of the first bar code.
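One plausible reading of this step, with the candidate pixels collected as a binary mask: a morphological closing bridges the gaps between bars or modules, and the largest connected region's bounding rectangle becomes the bar code boundary. The kernel size is an illustrative placeholder:

```python
import cv2
import numpy as np

def barcode_boundary(candidate_mask):
    """Close gaps in a boolean candidate-pixel mask, keep the largest
    connected region, and return its bounding rectangle (x, y, w, h)."""
    mask = candidate_mask.astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
    closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Label connected regions; row i of `stats` describes region i.
    n, _labels, stats, _ = cv2.connectedComponentsWithStats(closed,
                                                            connectivity=8)
    if n < 2:  # label 0 is the background
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return (stats[largest, cv2.CC_STAT_LEFT],
            stats[largest, cv2.CC_STAT_TOP],
            stats[largest, cv2.CC_STAT_WIDTH],
            stats[largest, cv2.CC_STAT_HEIGHT])
```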
In a second aspect, the present application provides a multi-barcode recognition method, the method comprising:
acquiring a target image, wherein the target image comprises a plurality of bar codes, and the bar codes are not overlapped with each other;
decoding M target bar codes in the plurality of bar codes to obtain M decoding contents, wherein M is a positive integer;
generating corresponding prompt information based on each decoded content;
and outputting the M prompt messages.
Optionally, in an implementation of the second aspect, the method further includes:
receiving a selection instruction of a user, wherein the selection instruction indicates the selection of target prompt information, and the target prompt information is one of the M prompt messages;
and in response to the selection instruction, triggering the function of the decoding content corresponding to the target prompt information, wherein the function includes at least one of the following:
jumping to a corresponding webpage;
opening a target function in a corresponding application program;
displaying the corresponding character string;
displaying the corresponding video;
or, playing the corresponding audio.
Optionally, in an implementation of the second aspect, the method further includes:
identifying that the plurality of bar codes comprises M two-dimensional codes and N one-dimensional codes;
and determining the M two-dimensional codes as the M target bar codes.
Optionally, in an implementation of the second aspect, the method further includes:
acquiring the definition of each bar code in the plurality of bar codes;
and determining the M bar codes ranked highest in definition among the plurality of bar codes as the M target bar codes.
Optionally, in an implementation of the second aspect, the method further includes:
acquiring the size of each bar code in the plurality of bar codes;
and determining the M bar codes ranked highest in size as the M target bar codes.
Optionally, in an implementation of the second aspect, the target image is acquired by a barcode recognition function of the target application, and the method further includes:
acquiring the use frequency of each bar code in the plurality of bar codes in the target application program;
and determining the M bar codes ranked highest in use frequency among the plurality of bar codes as the M target bar codes.
Optionally, in an implementation of the second aspect, the target image is acquired through a barcode recognition function of the target application, and the generating the corresponding prompt information based on each decoded content includes:
if the function of the first decoding content cannot be triggered by the target application program, generating first prompt information, wherein the first prompt information comprises a prompt that the function corresponding to the first decoding content cannot be triggered, or recommendation information of an application program that can trigger the function corresponding to the first decoding content;
if the function of the first decoding content can be triggered by the target application program, generating second prompt information, wherein the second prompt information comprises a prompt of the function corresponding to the first decoding content;
wherein the first decoding content is one of the plurality of decoding contents.
In a third aspect, the present application provides a barcode recognition method, the method comprising:
acquiring a target image, wherein the target image comprises a plurality of bar codes, and the bar codes are not overlapped with each other;
decoding the plurality of bar codes to obtain decoding contents corresponding to the plurality of bar codes respectively;
determining a target bar code from the plurality of bar codes according to the decoding content corresponding to the plurality of bar codes respectively;
triggering the function of decoding content corresponding to the target bar code; wherein the functions include at least one of:
jumping to a corresponding webpage; opening a target function in a corresponding application program; displaying the corresponding character string; displaying the corresponding video; or, playing the corresponding audio.
Optionally, in an implementation of the third aspect, the target image is acquired by a barcode identification function of a target application, and determining, according to the decoded contents respectively corresponding to the plurality of barcodes, a target barcode from the plurality of barcodes includes:
acquiring the use frequency of the function corresponding to each decoding content in the target application program;
and determining, as the target bar code, the bar code whose corresponding function has the highest use frequency among the plurality of bar codes.
In a fourth aspect, the present application provides a barcode recognition method, the method comprising:
displaying a first control and a second control, wherein the first control is used for triggering a first bar code identification method, the second control is used for triggering a second bar code identification method, and the first bar code identification method comprises the following steps: decoding at least two barcodes in an image comprising a plurality of barcodes, the second barcode identification method comprising: decoding one bar code in an image comprising a plurality of bar codes;
if a first selection operation of the user on the first control is received, executing the first bar code identification method on the acquired target image;
and if a second selection operation of the second control by the user is received, executing the second bar code identification method on the acquired target image.
Optionally, in an implementation of the fourth aspect, the first barcode recognition method further includes:
decoding one bar code in an image comprising the one bar code.
Optionally, in an implementation of the fourth aspect, the second barcode recognition method further includes:
decoding one bar code in an image comprising the one bar code.
Optionally, in an implementation manner of the fourth aspect, the first barcode recognition method specifically includes:
acquiring gradient information of an image comprising a plurality of bar codes;
determining M first areas and N second areas from the image comprising the plurality of bar codes according to the gradient information, wherein each first area corresponds to one of the one-dimensional codes in the plurality of bar codes, each second area corresponds to one of the two-dimensional codes in the plurality of bar codes, and M and N are positive integers;
and decoding the M first areas and the N second areas.
Optionally, in an implementation of the fourth aspect, the decoding the M first regions and the N second regions includes:
decoding the M first areas based on a decoding rule of a one-dimensional code;
and decoding the N second areas based on a decoding rule of the two-dimensional code.
Optionally, in an implementation of the fourth aspect, the method further includes:
obtaining M+N decoding contents obtained after decoding, wherein the M+N decoding contents comprise M decoding contents corresponding to M first areas and N decoding contents corresponding to N second areas, each decoding content comprises a character string, and each decoding content is used for triggering a corresponding function; wherein the functions include at least one of: jumping to a corresponding webpage; opening a target function in a corresponding application program; displaying the corresponding character string; displaying the corresponding video; or, playing the corresponding audio.
In a fifth aspect, the present application provides a terminal device, including:
an acquisition module, which is used for acquiring a target image, wherein the target image comprises a plurality of bar codes, the bar codes do not overlap one another, and the bar codes comprise one-dimensional codes and two-dimensional codes; and for acquiring gradient information of the target image;
the determining module is used for determining M first areas and N second areas from the target image according to the gradient information, wherein each first area corresponds to one of the one-dimensional codes in the plurality of bar codes, each second area corresponds to one of the two-dimensional codes in the plurality of bar codes, and M and N are positive integers;
and the decoding module is used for decoding the M first areas and the N second areas.
Optionally, in an implementation manner of the fifth aspect, the decoding module is specifically configured to:
decoding the M first areas based on a decoding rule of a one-dimensional code;
and decoding the N second areas based on a decoding rule of the two-dimensional code.
Optionally, in an implementation manner of the fifth aspect, the acquiring module is further configured to:
obtaining M+N decoding contents obtained after decoding, wherein the M+N decoding contents comprise M decoding contents corresponding to M first areas and N decoding contents corresponding to N second areas, each decoding content comprises a character string, and each decoding content is used for triggering a corresponding function;
wherein the functions include at least one of:
jumping to a corresponding webpage;
opening a target function in a corresponding application program;
displaying the corresponding character string;
displaying the corresponding video;
or, playing the corresponding audio.
Optionally, in an implementation of the fifth aspect, the terminal device further includes: an output module for:
generating corresponding prompt information based on each of L decoding contents, wherein the L decoding contents belong to the M+N decoding contents, and the prompt information includes:
a prompt of the function corresponding to the current decoding content, a prompt that the function corresponding to the current decoding content cannot be triggered, or recommendation information of an application program that can trigger the function corresponding to the current decoding content;
and displaying the L prompt messages, wherein L is a positive integer less than or equal to M+N.
Optionally, in an implementation of the fifth aspect, the terminal device further includes:
the receiving module is used for receiving a selection instruction of a user, wherein the selection instruction indicates the selection of target prompt information, and the target prompt information is one of the L prompt messages;
the output module is further used for responding to the selection instruction and triggering the function of decoding content corresponding to the target prompt information.
Optionally, in an implementation of the fifth aspect, the L decoding contents do not include decoding contents whose corresponding bar codes are one-dimensional codes.
Optionally, in an implementation manner of the fifth aspect, the acquiring module is further configured to:
obtaining the definition of each bar code in the M+N bar codes by performing definition recognition on the bar codes in the M first areas and the N second areas;
the determining module is further configured to:
determining the L decoding contents from the M+N decoding contents according to the definition of each bar code in the M+N bar codes, wherein the bar code corresponding to each of the L decoding contents is one of the first L bar codes when the M+N bar codes are sorted in descending order of definition.
Optionally, in an implementation manner of the fifth aspect, the acquiring module is further configured to:
obtaining the size of each bar code in the M+N bar codes by performing size recognition on the bar codes in the M first areas and the N second areas;
the determining module is further configured to:
and determining the L decoding contents from the M+N decoding contents according to the size of each bar code in the M+N bar codes, wherein the bar codes corresponding to the L decoding contents are the first L bar codes when the M+N bar codes are sorted in descending order of size.
Optionally, in an implementation of the fifth aspect, the target image is acquired by a barcode recognition function of a target application, and the acquiring module is further configured to:
acquiring the use frequency of the function corresponding to each decoding content in the plurality of decoding contents in the target application program;
the determining module is further configured to:
and determining the L decoding contents from the M+N decoding contents according to the use frequency of the function corresponding to each decoding content, wherein the L decoding contents are the first L decoding contents when the M+N decoding contents are sorted in descending order of the use frequency of their corresponding functions.
Optionally, in an implementation manner of the fifth aspect, a distance between a display position of each prompt message and a corresponding first target area is within a preset range, where the first target area is one of the M first areas and the N second areas.
Optionally, in an implementation of the fifth aspect, the output module is further configured to:
and displaying an association identifier, wherein the association identifier is used for indicating the association relation between each prompt message and a corresponding first target area, and the first target area is one of the M first areas and the N second areas.
Optionally, in an implementation of the fifth aspect, the output module is further configured to:
if a selection instruction of the user for the L prompt messages is not received within a preset time, determining first decoding content from the M+N decoding contents, wherein the use frequency of the function corresponding to the first decoding content is the highest among the M+N decoding contents, the definition of the bar code corresponding to the first decoding content is the highest among the M+N bar codes, or the size of the bar code corresponding to the first decoding content is the largest among the M+N bar codes; the target image is acquired through a target application program, and the use frequency is the frequency with which the target application program uses the function corresponding to the decoding content;
triggering the function corresponding to the first decoding content.
Optionally, in an implementation of the fifth aspect, the terminal device further includes:
and the rotating module is used for rotating the second target area if the minimum included angle between the boundary line of the second target area and the transverse axis direction of the target image is larger than a first preset angle, so that the minimum included angle between the boundary line of the second target area and the transverse axis direction of the target image is smaller than the first preset angle, and the second target area is one of the M first areas and the N second areas.
Optionally, in an implementation of the fifth aspect, the gradient information includes a pixel value gradient magnitude value and a pixel value gradient direction, the gradient information indicates pixel value variation information of each pixel point of the target image, the pixel value gradient direction indicates a maximum variation direction of a pixel value of each pixel point, and the pixel value gradient magnitude value indicates a pixel value variation magnitude of the maximum variation direction of the pixel value of each pixel point.
Optionally, in an implementation manner of the fifth aspect, the determining module is specifically configured to:
performing image division on the target image to obtain a plurality of sub-images;
determining a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image includes M pixel points, the difference between the pixel value gradient direction of N pixel points in the M pixel points and a first angle is within a preset angle range, the ratio of the sum of the pixel value gradient amplitude values of the N pixel points to the sum of the pixel value gradient amplitude values of the M pixel points is larger than a first preset value, and any one of the plurality of target sub-images is adjacent to at least one other target sub-image;
and determining the circumscribed rectangular area of the plurality of target sub-images as a first area.
Optionally, in an implementation manner of the fifth aspect, the determining module is specifically configured to:
performing image division on the target image to obtain a plurality of sub-images;
identifying a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image includes M pixel points, the difference between the pixel value gradient direction of O pixel points in the M pixel points and a second angle is within a preset angle range, the difference between the pixel value gradient direction of P pixel points in the M pixel points and a third angle is within the preset angle range, the sum of the pixel value gradient amplitude values of the O pixel points plus the sum of the pixel value gradient amplitude values of the P pixel points is larger than a fourth preset value, the difference between the second angle and the third angle is within a preset range, and any one of the plurality of target sub-images is adjacent to at least one other target sub-image;
and determining the circumscribed rectangular area of the plurality of target sub-images as a second area.
In a sixth aspect, the present application provides a terminal device, including:
The acquisition module is used for acquiring a target image, wherein the target image comprises a plurality of bar codes, and the bar codes are not overlapped with each other;
the decoding module is used for decoding M target bar codes in the plurality of bar codes to obtain M decoding contents, wherein M is a positive integer;
the generation module is used for generating corresponding prompt information based on each decoding content;
and the output module is used for outputting the M prompt messages.
Optionally, in an implementation of the sixth aspect, the terminal device further includes:
the receiving module is used for receiving a selection instruction of a user, wherein the selection instruction indicates the selection of target prompt information, and the target prompt information is one of the M prompt messages;
the output module is used for, in response to the selection instruction, triggering the function of the decoding content corresponding to the target prompt information, wherein the function includes at least one of the following:
jumping to a corresponding webpage;
opening a target function in a corresponding application program;
displaying the corresponding character string;
displaying the corresponding video;
or, playing the corresponding audio.
Optionally, in an implementation manner of the sixth aspect, the acquiring module is further configured to:
identifying that the plurality of bar codes comprises M two-dimensional codes and N one-dimensional codes;
the determining module is further configured to:
and determining the M two-dimensional codes as the M target bar codes.
Optionally, in an implementation manner of the sixth aspect, the acquiring module is further configured to:
acquiring the definition of each bar code in the plurality of bar codes;
the determining module is further configured to:
and determining the M bar codes ranked highest in definition among the plurality of bar codes as the M target bar codes.
Optionally, in an implementation manner of the sixth aspect, the acquiring module is further configured to:
acquiring the size of each bar code in the plurality of bar codes;
the determining module is further configured to:
and determining the M bar codes ranked highest in size as the M target bar codes.
Optionally, in an implementation of the sixth aspect, the target image is acquired by a barcode recognition function of a target application, and the acquiring module is further configured to:
acquiring the use frequency of each bar code in the plurality of bar codes in the target application program;
the determining module is further configured to:
and determining the M bar codes ranked highest in use frequency among the plurality of bar codes as the M target bar codes.
Optionally, in an implementation of the sixth aspect, the target image is acquired through a barcode recognition function of a target application, and the generating module is specifically configured to:
if the function of the first decoding content cannot be triggered by the target application program, generating first prompt information, wherein the first prompt information comprises a prompt that the function corresponding to the first decoding content cannot be triggered, or recommendation information of an application program that can trigger the function corresponding to the first decoding content;
if the function of the first decoding content can be triggered by the target application program, generating second prompt information, wherein the second prompt information comprises a prompt of the function corresponding to the first decoding content;
wherein the first decoding content is one of the plurality of decoding contents.
In a seventh aspect, the present application provides a terminal device, including:
the acquisition module is used for acquiring a target image, wherein the target image comprises a plurality of bar codes, and the bar codes are not overlapped with each other;
the decoding module is used for decoding the plurality of bar codes to obtain decoding contents corresponding to the plurality of bar codes respectively;
the determining module is used for determining a target bar code from the plurality of bar codes according to the decoding content corresponding to the plurality of bar codes respectively;
The output module is used for triggering the function of the decoding content corresponding to the target bar code; wherein the functions include at least one of:
jumping to a corresponding webpage; opening a target function in a corresponding application program; displaying the corresponding character string; displaying the corresponding video; or, playing the corresponding audio.
Optionally, in an implementation of the seventh aspect, the target image is acquired through a barcode recognition function of a target application, and the determining module is specifically configured to:
acquiring the use frequency of the function corresponding to each decoding content in the target application program;
and determining, as the target bar code, the bar code whose corresponding function has the highest use frequency among the plurality of bar codes.
In an eighth aspect, the present application provides a terminal device, including:
the output module is used for displaying a first control and a second control, wherein the first control is used for triggering the first decoding module, the second control is used for triggering the second decoding module, and the first decoding module is used for: decoding at least two barcodes in an image comprising a plurality of barcodes, the second decoding module being configured to: decoding one bar code in an image comprising a plurality of bar codes;
if a first selection operation of a user on the first control is received, the first decoding module is triggered to decode the acquired target image;
and if receiving a second selection operation of the second control by the user, triggering a second decoding module to decode the acquired target image.
Optionally, in an implementation of the eighth aspect, the first decoding module is further configured to:
decoding one bar code in an image comprising the one bar code.
Optionally, in an implementation of the eighth aspect, the second decoding module is further configured to:
decoding one bar code in an image comprising the one bar code.
Optionally, in an implementation manner of the eighth aspect, the first decoding module is specifically configured to:
acquiring gradient information of an image comprising a plurality of bar codes;
determining M first areas and N second areas from the image comprising the plurality of bar codes according to the gradient information, wherein each first area corresponds to one of the one-dimensional codes in the plurality of bar codes, each second area corresponds to one of the two-dimensional codes in the plurality of bar codes, and M and N are positive integers;
and decoding the M first areas and the N second areas.
Optionally, in an implementation manner of the eighth aspect, the first decoding module is specifically configured to:
decoding the M first areas based on a decoding rule of a one-dimensional code;
and decoding the N second areas based on a decoding rule of the two-dimensional code.
Optionally, in an implementation of the eighth aspect, the first decoding module is further configured to:
obtaining a plurality of decoding contents obtained after the first decoding module executes decoding, wherein the plurality of decoding contents comprise M decoding contents corresponding to M first areas and N decoding contents corresponding to N second areas, each decoding content comprises a character string, and each decoding content is used for triggering a corresponding function; wherein the functions include at least one of: jumping to a corresponding webpage; opening a target function in a corresponding application program; displaying the corresponding character string; displaying the corresponding video; or, playing the corresponding audio.
In a ninth aspect, an embodiment of the present application provides a multi-barcode identification method, applied to a server, where the method includes:
receiving a target image sent by a terminal device, wherein the target image comprises a plurality of bar codes, the bar codes are not overlapped with each other, and the bar codes comprise one-dimensional codes and two-dimensional codes;
acquiring gradient information of the target image;
determining M first areas and N second areas from the target image according to the gradient information, wherein each first area corresponds to one of the one-dimensional codes in the plurality of bar codes, each second area corresponds to one of the two-dimensional codes in the plurality of bar codes, and M and N are positive integers;
and decoding the M first areas and the N second areas.
Optionally, in an implementation manner of the ninth aspect, the decoding the M first regions and the N second regions includes:
decoding the M first areas based on a decoding rule of a one-dimensional code;
and decoding the N second areas based on a decoding rule of the two-dimensional code.
Optionally, in an implementation of the ninth aspect, the method further includes:
obtaining M+N decoding contents obtained after decoding, wherein the M+N decoding contents comprise M decoding contents corresponding to M first areas and N decoding contents corresponding to N second areas, each decoding content comprises a character string, and each decoding content is used for triggering a corresponding function;
wherein the functions include at least one of:
jumping to a corresponding webpage;
opening a target function in a corresponding application program;
displaying the corresponding character string;
displaying the corresponding video;
or, playing the corresponding audio.
Optionally, in an implementation of the ninth aspect, before the decoding of the M first regions and the N second regions, the method further includes:
if the minimum included angle between a boundary line of a second target area and the transverse axis direction of the target image is larger than a first preset angle, rotating the second target area so that this included angle becomes smaller than the first preset angle, wherein the second target area is one of the M first areas and the N second areas.
Optionally, in an implementation of the ninth aspect, the gradient information includes a pixel value gradient magnitude value and a pixel value gradient direction, the gradient information indicates pixel value variation information of each pixel point of the target image, the pixel value gradient direction indicates a maximum variation direction of a pixel value of each pixel point, and the pixel value gradient magnitude value indicates a pixel value variation magnitude of the maximum variation direction of the pixel value of each pixel point.
Optionally, in an implementation of the ninth aspect, the determining M first regions and N second regions from the target image according to gradient information includes:
performing image division on the target image to obtain a plurality of sub-images;
determining a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image includes M pixel points, the difference between the pixel value gradient direction of N pixel points in the M pixel points and a first angle is within a preset angle range, the ratio of the sum of the pixel value gradient amplitude values of the N pixel points to the sum of the pixel value gradient amplitude values of the M pixel points is larger than a first preset value, and any one of the plurality of target sub-images is adjacent to at least one other target sub-image;
and determining the circumscribed rectangular area of the plurality of target sub-images as a first area.
Optionally, in an implementation of the ninth aspect, the determining M first regions and N second regions from the target image according to gradient information includes:
performing image division on the target image to obtain a plurality of sub-images;
identifying a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image includes M pixel points, the difference between the pixel value gradient direction of O pixel points in the M pixel points and a second angle is within a preset angle range, the difference between the pixel value gradient direction of P pixel points in the M pixel points and a third angle is within the preset angle range, the sum of the pixel value gradient amplitude values of the O pixel points plus the sum of the pixel value gradient amplitude values of the P pixel points is larger than a fourth preset value, the difference between the second angle and the third angle is within a preset range, and any one of the plurality of target sub-images is adjacent to at least one other target sub-image;
and determining the circumscribed rectangular area of the plurality of target sub-images as a second area.
Optionally, in an implementation manner of the ninth aspect, a pixel value gradient magnitude value of each of the M pixel points is greater than a second preset value.
Optionally, in an implementation manner of the ninth aspect, a sum of pixel value gradient magnitude values of the M pixel points is greater than a third preset value.
Optionally, in an implementation of the ninth aspect, the method further includes:
and performing a morphological operation on the image region where the first bar code is located, extracting the largest connected region obtained after the morphological operation based on a region-connectivity algorithm, and determining the boundary of the largest connected region as the boundary line of the first bar code.
In a tenth aspect, the present application provides a server, the server comprising:
the acquisition module is used for receiving a target image sent by the terminal equipment, wherein the target image comprises a plurality of bar codes, the bar codes are not overlapped with each other, and the bar codes comprise one-dimensional codes and two-dimensional codes; acquiring gradient information of the target image;
the determining module is used for determining M first areas and N second areas from the target image according to the gradient information, wherein each first area corresponds to one of the one-dimensional codes in the plurality of bar codes, each second area corresponds to one of the two-dimensional codes in the plurality of bar codes, and M and N are positive integers;
and the decoding module is used for decoding the M first areas and the N second areas.
Optionally, in an implementation manner of the tenth aspect, the decoding module is specifically configured to:
decoding the M first areas based on a decoding rule of a one-dimensional code;
and decoding the N second areas based on a decoding rule of the two-dimensional code.
Optionally, in an implementation manner of the tenth aspect, the acquiring module is further configured to:
obtaining M+N decoding contents obtained after decoding, wherein the M+N decoding contents comprise M decoding contents corresponding to M first areas and N decoding contents corresponding to N second areas, each decoding content comprises a character string, and each decoding content is used for triggering a corresponding function;
wherein the functions include at least one of:
jumping to a corresponding webpage;
opening a target function in a corresponding application program;
displaying the corresponding character string;
displaying the corresponding video;
or, playing the corresponding audio.
Optionally, in an implementation of the tenth aspect, the server further includes:
and the rotating module is used for rotating the second target area if the minimum included angle between the boundary line of the second target area and the transverse axis direction of the target image is larger than a first preset angle, so that the minimum included angle between the boundary line of the second target area and the transverse axis direction of the target image is smaller than the first preset angle, and the second target area is one of the M first areas and the N second areas.
Optionally, in an implementation of the tenth aspect, the gradient information includes a pixel value gradient magnitude value and a pixel value gradient direction, the gradient information indicates pixel value variation information of each pixel point of the target image, the pixel value gradient direction indicates a maximum variation direction of a pixel value of each pixel point, and the pixel value gradient magnitude value indicates a pixel value variation magnitude of the maximum variation direction of the pixel value of each pixel point.
Optionally, in an implementation manner of the tenth aspect, the determining module is specifically configured to:
performing image division on the target image to obtain a plurality of sub-images;
determining a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image includes M pixel points, the difference between the pixel value gradient direction of N pixel points in the M pixel points and a first angle is within a preset angle range, the ratio of the sum of the pixel value gradient amplitude values of the N pixel points to the sum of the pixel value gradient amplitude values of the M pixel points is larger than a first preset value, and any one of the plurality of target sub-images is adjacent to at least one other target sub-image;
and determining the circumscribed rectangular area of the plurality of target sub-images as a first area.
Optionally, in an implementation manner of the tenth aspect, the determining module is specifically configured to:
performing image division on the target image to obtain a plurality of sub-images;
identifying a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image includes M pixel points, the difference between the pixel value gradient direction of O pixel points in the M pixel points and a second angle is within a preset angle range, the difference between the pixel value gradient direction of P pixel points in the M pixel points and a third angle is within the preset angle range, the sum of the pixel value gradient amplitude values of the O pixel points plus the sum of the pixel value gradient amplitude values of the P pixel points is larger than a fourth preset value, the difference between the second angle and the third angle is within a preset range, and any one of the plurality of target sub-images is adjacent to at least one other target sub-image;
and determining the circumscribed rectangular area of the plurality of target sub-images as a second area.
In an eleventh aspect, the present application provides a terminal device, including: one or more processors; one or more memories; a plurality of applications; and one or more programs, wherein the one or more programs are stored in the memory, which when executed by the processor, cause the terminal device to perform the steps of any of the first to fourth aspects and possible implementations of any of the above aspects.
In a twelfth aspect, the present application provides a server, including: one or more processors; one or more memories; and one or more programs, wherein the one or more programs are stored in the memory and, when executed by the processor, cause the server to perform the steps of the foregoing ninth aspect and any possible implementation of the ninth aspect.
In a thirteenth aspect, the present application provides an apparatus, which is included in a terminal device, and which has a function of implementing the actions of any one of the above first to fourth aspects and possible implementations of any one of the above aspects. The functions may be realized by hardware, or may be realized by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the functions described above.
In a fourteenth aspect, the present application provides a terminal device, including: a display screen; a camera; one or more processors; a memory; a plurality of applications; and one or more computer programs. The one or more computer programs are stored in the memory and include instructions. The instructions, when executed by the terminal device, cause the terminal device to perform the steps of any one of the first to fourth aspects and any one of the possible implementations of those aspects.
In a fifteenth aspect, the present application provides a computer storage medium comprising computer instructions which, when run on an electronic device or a server, cause the electronic device or server to perform any one of the possible methods of the above aspects.
In a sixteenth aspect, the present application provides a computer program product which, when run on an electronic device or a server, causes the electronic device or server to perform any one of the possible methods of the above aspects.
The embodiments of the present application provide a multi-bar code identification method, which includes the following steps: acquiring a target image, wherein the target image comprises a plurality of bar codes, the bar codes do not overlap one another, and the bar codes comprise one-dimensional codes and two-dimensional codes; acquiring gradient information of the target image; determining M first regions and N second regions from the target image according to the gradient information, wherein each first region corresponds to one of the one-dimensional codes in the plurality of bar codes, each second region corresponds to one of the two-dimensional codes in the plurality of bar codes, and M and N are positive integers; and decoding the M first regions and the N second regions. By this method, the terminal device identifies the bar code type of each bar code and the region where each bar code is located, and decodes each region, so that the terminal device can identify multiple types of bar codes included in the same image.
Drawings
Fig. 1 is a schematic structural diagram of a terminal device provided in an embodiment of the present application;
fig. 2 is a software architecture block diagram of a terminal device according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an embodiment of a multi-barcode recognition method according to an embodiment of the present application;
FIG. 4 is a schematic illustration of a scenario provided in an embodiment of the present application;
FIG. 5 is a schematic representation of a gradient image in an embodiment of the present application;
FIG. 6 is an image partition illustration of a target image in an embodiment of the present application;
FIG. 7a is a schematic diagram of a plurality of sub-image recognition results according to an embodiment of the present application;
FIG. 7b is a schematic diagram of a plurality of sub-image recognition results according to an embodiment of the present application;
fig. 8a and 8b are schematic views of image processing in the present embodiment;
FIG. 8c is a schematic representation of a target image;
FIG. 8d is an illustration of the identification of an image;
FIG. 8e is an illustration of the identification of an image;
FIG. 9 is a schematic illustration of a bar code identification method provided in an embodiment of the present application;
FIG. 10 is a schematic illustration of a bar code identification method provided in an embodiment of the present application;
FIG. 11 is a schematic illustration of a bar code identification method provided in an embodiment of the present application;
fig. 12a is an application example schematic of a multi-barcode recognition method provided in an embodiment of the present application;
FIGS. 12b-12d are schematic flow diagrams of a multi-barcode recognition method according to an embodiment of the present application;
FIG. 13 is an exemplary illustration of a multi-barcode recognition method provided in an embodiment of the present application;
fig. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a terminal device provided in an embodiment of the present application;
fig. 16 is a schematic structural diagram of a terminal device provided in an embodiment of the present application;
fig. 17 is a schematic structural diagram of a terminal device provided in an embodiment of the present application;
fig. 18 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
fig. 20 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
Embodiments of the present invention will be described below with reference to the accompanying drawings in the embodiments of the present invention. The terminology used in the description of the embodiments of the invention herein is for the purpose of describing particular embodiments of the invention only and is not intended to be limiting of the invention.
The terms "first", "second", and the like in the description, the claims, and the above-described figures of the present application are used for distinguishing between similar objects and are not necessarily used for describing a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and are merely a manner of distinguishing between objects of the same nature in describing the embodiments of the application. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, the structure of the terminal device 100 provided in the embodiment of the present application will be exemplified below. Referring to fig. 1, fig. 1 is a schematic structural diagram of a terminal device provided in an embodiment of the present application.
As shown in fig. 1, the terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display 194, a user identification module (subscriber identification module, SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the terminal device 100. In other embodiments of the present application, terminal device 100 may include more or less components than illustrated, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural center and a command center of the terminal device 100. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not constitute a structural limitation of the terminal device 100. In other embodiments of the present application, the terminal device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the terminal device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
In some possible embodiments, the terminal device 100 may communicate with other devices using wireless communication functionality. For example, the terminal device 100 may communicate with the second terminal device 200, the terminal device 100 establishes a screen-casting connection with the second terminal device 200, the terminal device 100 outputs screen-casting data to the second terminal device 200, and the like. The screen projection data output by the terminal device 100 may be audio/video data. The communication process between the terminal device 100 and the second terminal device 200 may refer to the related description of the subsequent embodiments, which is not repeated here.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the terminal device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the terminal device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processing such as filtering and amplifying on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor and convert the signal into electromagnetic waves through the antenna 1 for radiation. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication technology (near field communication, NFC), infrared technology (IR), etc., applied to the terminal device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, demodulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of terminal device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that terminal device 100 may communicate with a network and other devices via wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The terminal device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the terminal device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
In some possible embodiments, the display 194 may be used to display various interfaces of the system output of the terminal device 100. The respective interfaces outputted from the terminal device 100 may be referred to the related description of the subsequent embodiments.
The terminal device 100 may implement a photographing function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the terminal device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals.
Video codecs are used to compress or decompress digital video. The terminal device 100 may support one or more video codecs. In this way, the terminal device 100 can play or record video in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the terminal device 100 may be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to realize expansion of the memory capability of the terminal device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the terminal device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data (such as audio data, phonebook, etc.) created during use of the terminal device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The terminal device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc. In some possible implementations, the audio module 170 may be used to play sound corresponding to video. For example, when the display 194 displays a video playback screen, the audio module 170 outputs the sound of the video playback.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals.
A receiver 170B, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal.
Microphone 170C, also referred to as a "microphone" or "microphone", is used to convert sound signals into electrical signals.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a cellular telecommunications industry association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The gyro sensor 180B may be used to determine a motion gesture of the terminal device 100. The air pressure sensor 180C is used to measure air pressure.
The acceleration sensor 180E can detect the magnitude of acceleration of the terminal device 100 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the terminal device 100 is stationary. The method can also be used for identifying the gesture of the terminal equipment, and is applied to the applications such as horizontal and vertical screen switching, pedometers and the like.
A distance sensor 180F for measuring a distance.
The ambient light sensor 180L is used to sense ambient light level.
The fingerprint sensor 180H is used to collect a fingerprint.
The temperature sensor 180J is for detecting temperature.
The touch sensor 180K, also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is for detecting a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the terminal device 100 at a different location than the display 194.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The terminal device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the terminal device 100.
The motor 191 may generate a vibration cue.
The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The SIM card interface 195 is used to connect a SIM card.
The above is a description of the structure of the terminal device 100, and the software structure of the terminal device will be described next. The software system of the terminal device 100 may employ a layered architecture, an event driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. In this embodiment, taking an Android system with a layered architecture as an example, a software structure of the terminal device 100 is illustrated.
Fig. 2 is a software configuration block diagram of the terminal device 100 of the embodiment of the present application.
As shown in fig. 2, the hierarchical architecture divides the software into several layers, each with a distinct role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime (Android runtime) and system libraries, and the kernel layer. The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The interface content may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the terminal device 100, for example, management of call status (including connected, hung up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows the application to display notification information in the status bar, and can be used to convey notification-type messages that automatically disappear after a short stay without requiring user interaction. For example, the notification manager is used to notify that a download is complete, to give message alerts, and so on. The notification manager may also present notifications in the form of a chart or scroll-bar text in the system top status bar, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, text is prompted in the status bar, a prompt tone is emitted, the terminal device vibrates, or an indicator light blinks.
The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core libraries consist of two parts: one part is the functions that the java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media library (media library), three-dimensional graphics processing library (e.g., openGL ES), 2D graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
For easy understanding, the following embodiments of the present application will take an electronic device having a structure shown in fig. 1 and fig. 2 as an example, and specifically describe a multi-barcode identification method provided in the embodiments of the present application in conjunction with the accompanying drawings and application scenarios.
Referring to fig. 3, fig. 3 is a schematic diagram of an embodiment of a multi-barcode identification method provided in an embodiment of the present application, and as shown in fig. 3, the multi-barcode identification method provided in the present application includes:
301. The terminal equipment acquires a target image, wherein the target image comprises a plurality of bar codes, the bar codes are not overlapped with each other, and the bar codes comprise one-dimensional codes and two-dimensional codes.
In the embodiment of the application, the terminal device can acquire the target image comprising a plurality of bar codes.
Next, how the terminal device acquires the target image will be described.
1. The terminal equipment can shoot through a camera carried by the terminal equipment to acquire a target image.
In one embodiment, a user may open a barcode recognition function on some application program and select a barcode scanning function, where the function may call a camera on the terminal device. The user may then shoot a certain area through the camera of the terminal device, where the area is provided with a plurality of barcodes, so that the terminal device may obtain a target image including the plurality of barcodes. In one scenario, a plurality of barcodes may be provided in some specific areas; for example, a plurality of barcodes may be printed in some areas of a poster, a food package bag, an envelope, etc., and each barcode may serve a different function after being scanned and decoded. For example, a poster may be printed with two two-dimensional codes and one one-dimensional code, where one two-dimensional code corresponds to adding a WeChat friend, the other two-dimensional code corresponds to following a WeChat official account, and the one-dimensional code corresponds to the identification of a commodity (for scanning when a purchase occurs).
Referring to fig. 4, fig. 4 is a schematic view of a scenario provided in the embodiment of the present application, as shown in fig. 4, a user may use a camera of a terminal device 401 to photograph an area 402 printed with a plurality of barcodes, and accordingly, the terminal device 401 may obtain a target image including a plurality of barcodes 403.
In one embodiment, a user may open a barcode recognition function on some application program and select a barcode scanning function, where the function may call an external camera of the terminal device, where the external camera is connected to the terminal device, and further, the user may shoot a certain area through the external camera, where the area is provided with a plurality of barcodes, so that the terminal device may obtain a target image including the plurality of barcodes.
2. The terminal device can acquire a target image comprising a plurality of bar codes from a local album or a cloud album.
In one embodiment, a user may open a barcode recognition function on some application program, and select an image from a local album or a cloud album, where the function may open the local album or the cloud album on the terminal device, and the user may select a target image to be barcode-recognized from the local album or the cloud album on the terminal device, where the target image may include a plurality of barcodes, and further, the terminal device may obtain a target image including a plurality of barcodes from the local album or the cloud album.
In the embodiment of the application, the plurality of bar codes in the target image can comprise one-dimensional codes and two-dimensional codes, wherein the two-dimensional codes can be of the same kind or of different kinds; the one-dimensional bar codes can likewise be of the same kind or of different kinds. By way of example, the types of one-dimensional codes may include, but are not limited to, EAN-8, EAN-13, UPC-A, UPC-E, Codabar, Code 39, Code 93, Code 128, and ITF one-dimensional bar codes. The types of two-dimensional codes may include, but are not limited to, PDF417, Aztec, Datamatrix, Maxicode, Code 49, Code 16K, and Code One two-dimensional bar codes.
In this embodiment of the present application, the plurality of barcodes are not overlapped with each other. I.e. the boundaries between each barcode on the target image can be identified and demarcated. At the pixel level, the plurality of bar codes have no overlapped pixels, and the minimum circumscribed rectangular frames of the plurality of bar codes have no overlap.
302. And the terminal equipment acquires gradient information of the target image.
In this embodiment of the present application, the gradient information includes a pixel value gradient magnitude and a pixel value gradient direction. The gradient information indicates the pixel value variation information of each pixel point of the target image: the pixel value gradient direction indicates the direction of maximum variation of the pixel value of each pixel point, and the pixel value gradient magnitude indicates the magnitude of the pixel value variation in that direction. Specifically, the terminal device may obtain the pixel value gradient magnitude and the pixel value gradient direction of each pixel point of the target image based on a gradient detection algorithm. It should be noted that the terminal device may obtain the pixel value gradient magnitudes and gradient directions of all the pixel points in the target image, or may obtain only those of a part of the pixel points, which is not limited in this application. Optionally, in one embodiment, the gradient detection algorithm may be a sobel algorithm or a scharr algorithm, or the like.
In the embodiment of the application, the terminal device may acquire a gradient image of the target image based on a gradient detection algorithm, and the gradient image may represent gradient information of the target image. In addition, the terminal device may perform gradient amplitude filtering processing on the obtained gradient image, for example, filter pixels whose gradient amplitude is lower than a preset threshold, and specifically, if the gradient amplitude of a certain pixel in the gradient image is lower than the preset amplitude threshold, set the gradient amplitude to 0, so as to achieve a filtering effect. By way of example, referring to fig. 5, fig. 5 is a schematic representation of a gradient image in an embodiment of the present application.
It should be noted that, before acquiring the gradient information of the target image, the terminal device may further perform image preprocessing on the target image, where the preprocessing may include, but is not limited to, color-to-gray-scale processing, size-reduction and amplification processing, and filtering processing. That is, in the embodiment of the present application, the terminal device may acquire gradient information of the target image after preprocessing.
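For illustration only, the following is a minimal sketch, assuming OpenCV and NumPy are available, of the preprocessing, gradient extraction, and amplitude filtering described above; the function name and the magnitude threshold are assumptions of this sketch, not part of the patent.

```python
import cv2
import numpy as np

def compute_gradient_info(image_bgr, mag_threshold=30.0):
    # Preprocessing: color-to-grayscale conversion and filtering.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (3, 3), 0)

    # Pixel value gradients via the Sobel operator (Scharr is analogous).
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)

    # Gradient magnitude and direction (radians in [0, 2*pi)).
    mag, ang = cv2.cartToPolar(gx, gy)

    # Amplitude filtering: set gradient magnitudes below the preset
    # threshold to 0, as described above.
    mag[mag < mag_threshold] = 0.0
    return mag, ang
```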
303. And the terminal equipment determines M first areas and N second areas from the target image according to the gradient information, wherein each first area corresponds to one-dimensional code in the plurality of bar codes, each second area corresponds to one two-dimensional code in the plurality of bar codes, and M and N are positive integers.
In the embodiment of the present application, after acquiring gradient information in a target image, a terminal device may determine M first areas and N second areas from the target image based on the acquired gradient information, where the target image may include at least m+n barcodes, and the terminal device may identify barcode types of the m+n barcodes and an image area where each barcode is located. The M+N barcodes correspond to the M first areas and the N second areas.
Specifically, the terminal device may first perform image division on the target image to obtain multiple sub-images.
In one embodiment, the terminal device may divide the target image into equally sized rectangular grids of M1 x M2, each rectangular grid corresponding to one sub-image. Referring to fig. 6, fig. 6 is an image division illustration of a target image in the embodiment of the present application, and as shown in fig. 6, a terminal device may perform image division on the target image, and divide the target image into rectangular grids with an equal size of 10×10, to obtain multiple sub-images 601.
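A possible implementation of this grid division, assuming NumPy-style arrays; the helper name and the dictionary layout are illustrative choices only.

```python
def split_into_grid(array_2d, m1, m2):
    # Divide the image (or its gradient maps) into an m1 x m2 grid of
    # equally sized rectangular cells, each cell being one sub-image.
    h, w = array_2d.shape[:2]
    cell_h, cell_w = h // m1, w // m2
    cells = {}
    for i in range(m1):
        for j in range(m2):
            cells[(i, j)] = array_2d[i * cell_h:(i + 1) * cell_h,
                                     j * cell_w:(j + 1) * cell_w]
    return cells  # maps grid coordinates (i, j) to sub-image views
```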
In the embodiment of the present application, after performing image division on a target image to obtain multiple sub-images, multiple sub-images corresponding to each barcode may be identified, and then description is made separately how the terminal device identifies multiple sub-images corresponding to the one-dimensional code and the two-dimensional code.
1. How the terminal device recognizes a plurality of sub-images corresponding to the one-dimensional code.
In the embodiment of the application, the terminal device may perform image division on the target image to obtain a plurality of sub-images; determining a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image comprises M pixel points, the difference between the pixel value gradient direction of N pixel points in the M pixel points and a first angle is in a preset angle range, the ratio of the sum of the pixel value gradient amplitude values of the N pixel points to the sum of the pixel value gradient amplitude values of the M pixel points is larger than a first preset value, and any one target sub-image in the plurality of target sub-images is adjacent to at least one target sub-image except the target sub-image; and determining the circumscribed rectangular area of the target sub-images as a first area.
Specifically, the plurality of barcodes may include a fifth barcode, the fifth barcode is a one-dimensional code, the terminal device may identify a plurality of target sub-images corresponding to the fifth barcode from the plurality of sub-images, wherein each target sub-image includes M pixel points, a difference between a gradient direction of a pixel value of N pixel points in the M pixel points and a first angle is within a preset angle range, and a ratio of a sum of gradient amplitude values of the pixel values of the N pixel points to a sum of gradient amplitude values of the pixel values of the M pixel points is greater than a first preset value.
In the embodiment of the application, the terminal device may calculate the gradient histogram of each sub-image. The pixel value gradient direction of each pixel point initially takes values in [0, 2π); in this application, two opposite pixel value gradient directions on the same straight line are considered to be the same pixel value gradient direction (for example, 0 and π are one pixel value gradient direction, and 0.5π and 1.5π are one pixel value gradient direction), that is, if the direction θ ≥ π, the direction is reassigned as θ := θ − π. Thus the value range of the pixel value gradient direction is mapped to [0, π). The terminal device may then perform histogram statistics on each sub-image. The horizontal axis of the gradient histogram is the pixel value gradient direction, with [0, π) divided into N segments, where N may typically be 12. The vertical axis of the histogram is the sum of the gradient magnitudes, i.e.

$$h_i = \sum_{p \in \Omega_i} \mathrm{mag}(p), \qquad \Omega_i = \{\, p \mid \theta(p) \in \mathrm{bin}(i) \,\}$$

wherein p represents a pixel point, mag(p) represents the pixel value gradient amplitude of the pixel point, θ(p) represents the direction of the point (taking values in [0, π)), and bin(i) represents the direction range corresponding to the i-th segment after the horizontal axis of the gradient histogram is divided into N segments. The segment number (taking values in [0, N)) with the largest histogram vertical-axis value (gradient amplitude sum) is defined as the main pixel value gradient direction of the grid, denoted bin_a; the second largest is the secondary pixel value gradient direction of the grid, denoted bin_b.
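The histogram statistics just described can be sketched as follows, reusing the magnitude and angle arrays from the earlier sketch; the folding of opposite directions and the choice of N = 12 bins follow the text, while the function name is an assumption.

```python
import numpy as np

def gradient_histogram(mag_cell, ang_cell, n_bins=12):
    # Fold opposite directions together: theta >= pi becomes theta - pi,
    # so all directions lie in [0, pi).
    theta = np.mod(ang_cell, np.pi)
    idx = np.minimum((theta / (np.pi / n_bins)).astype(int), n_bins - 1)

    # h_i = sum of gradient magnitudes of the pixels falling in bin(i).
    hist = np.zeros(n_bins)
    np.add.at(hist, idx.ravel(), mag_cell.ravel())

    order = np.argsort(hist)
    bin_a, bin_b = int(order[-1]), int(order[-2])  # main / secondary direction
    return hist, bin_a, bin_b
```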
In the embodiment of the application, the terminal device may calculate a one-dimensional code score for each sub-image, where the score may represent the likelihood that the sub-image belongs to an area where a one-dimensional code is located. Specifically, for each sub-image, the one-dimensional code score may be calculated according to the following formula:

$$E_1 = 1 - \frac{\max_i h_i}{H}$$

where H represents the sum of the vertical-axis magnitudes of the gradient histogram over the N bins (e.g., 12 bins), and h_i is the vertical-axis value of the gradient histogram in the i-th bin. A smaller one-dimensional code score E_1 indicates that the corresponding sub-image is more likely to contain the constituent elements of a one-dimensional code. A one-dimensional code score threshold ∈_1 may be set; if E_1 < ∈_1, the sub-image is a one-dimensional code component. In this embodiment, the closer h_i is to H for some bin i, the more the angles of the pixel points with large gradient amplitude changes fall within a single angle interval, which matches the characteristics of a one-dimensional code. In this embodiment, each target sub-image includes M pixel points; the difference between the pixel value gradient direction of N of those M pixel points and a first angle is within a preset angle range, where the first angle may be the main pixel value gradient direction described above, and the ratio of the sum of the pixel value gradient amplitudes of the N pixel points to the sum of the pixel value gradient amplitudes of the M pixel points is greater than a first preset value. Specifically, this ratio represents the proportion of gradient amplitude contributed by pixel points within the angle range relative to all pixel points in the sub-image; a larger ratio indicates that the angles of the pixel points with large gradient amplitude changes are all substantially within one angle range, matching the feature of a one-dimensional code, so the sub-image may be a sub-image corresponding to a one-dimensional code. Referring to fig. 7a, fig. 7a is a schematic diagram of a recognition result of a plurality of sub-images according to an embodiment of the present application; as shown in fig. 7a, each one-dimensional code may correspond to a plurality of sub-images 701.
It should be noted that, in this embodiment, the selection of the first preset value may be based on the actual application, which is not limited in this application.
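Under the reconstruction of E_1 above, the per-cell decision might look like the following sketch; the threshold value is an illustrative assumption.

```python
def is_1d_component(hist, eps1=0.35):
    # E1 = 1 - max_i(h_i) / H: small when one direction bin dominates,
    # which is characteristic of the parallel bars of a one-dimensional code.
    H = float(hist.sum())
    if H == 0.0:
        return False  # no gradient energy in this cell
    e1 = 1.0 - float(hist.max()) / H
    return e1 < eps1
```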
In this embodiment of the present application, after identifying a plurality of sub-images corresponding to a fifth barcode, the terminal device may determine that an circumscribed rectangular area of a plurality of target sub-images corresponding to the fifth barcode is a first area where the fifth barcode is located.
In the embodiment of the present application, in order to connect together a plurality of sub-images containing one-dimensional code components, the four adjacent sub-images above, below, and to the left and right of a sub-image (i, j) (i.e., (i-1, j), (i+1, j), (i, j-1), (i, j+1)) are considered. If sub-image (i, j) is a sub-image corresponding to the fifth barcode and one or more of the four adjacent sub-images are also sub-images corresponding to the fifth barcode, they are considered to be connected to each other. In one embodiment, the primary and secondary directions of sub-images (i-1, j) and (i, j) are required to be substantially consistent for (i-1, j) and (i, j) to be connected. It should be noted that there may be various ways of judging that the primary and secondary directions are substantially consistent; for example, a threshold may be preset as a criterion of similarity, which is not limited in this application.
In this embodiment of the present application, the terminal device may calculate, according to the connected region (that is, any one of the plurality of target sub-images is adjacent to at least one target sub-image other than itself), the bounding-box coordinates of the circumscribed rectangular area, where the circumscribed rectangular area is the first area where the fifth barcode (first barcode) is located.
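A minimal sketch of this connectivity step, assuming the per-cell classification results are stored in a dictionary keyed by grid coordinates; the direction-consistency check mentioned above is omitted for brevity.

```python
from collections import deque

def connected_regions(flags):
    # flags[(i, j)] is True when cell (i, j) was classified as a barcode
    # component; 4-neighbor flood fill groups cells into components.
    seen, regions = set(), []
    for start in [c for c, v in flags.items() if v]:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            i, j = queue.popleft()
            comp.append((i, j))
            for nb in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if flags.get(nb) and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        regions.append(comp)
    # Each component's circumscribed rectangle yields one barcode region.
    return regions
```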
In this embodiment of the present application, when a target image has a plurality of one-dimensional codes, sub-images corresponding to different one-dimensional codes may be numbered, referring to fig. 7b, fig. 7b is a schematic diagram of a plurality of sub-image recognition results provided in this embodiment of the present application, and as shown in fig. 7b, each one-dimensional code may correspond to a plurality of sub-images 701, where the target image includes two one-dimensional codes, the number of the sub-image corresponding to one-dimensional code is 1, and the number of the sub-image corresponding to the other one-dimensional code is 2.
2. How the terminal equipment identifies a plurality of sub-images corresponding to the two-dimensional code.
In the embodiment of the application, the terminal device may perform image division on the target image to obtain a plurality of sub-images; identifying a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image comprises M pixel points, the difference between the pixel value gradient direction of O pixel points in the M pixel points and a second angle is within a preset angle range, the difference between the pixel value gradient direction of P pixel points in the M pixel points and a third angle is within the preset angle range, the sum of the pixel value gradient amplitude values of the O pixel points and the sum of the pixel value gradient amplitude values of the P pixel points is larger than a fourth preset value, the difference between the second angle and the third angle is within the preset range, and any one of the plurality of target sub-images is adjacent to at least one target sub-image except the target sub-image; and determining the circumscribed rectangular area of the target sub-images as a second area.
Specifically, the plurality of barcodes may include a sixth barcode, the sixth barcode is a two-dimensional barcode, the terminal device may identify a plurality of target sub-images corresponding to the sixth barcode from the plurality of sub-images, wherein each target sub-image includes M pixel points, a difference between a pixel value gradient direction of O pixel points in the M pixel points and a second angle is within a preset angle range, a difference between a pixel value gradient direction of P pixel points in the M pixel points and a third angle is within a preset angle range, a sum of pixel value gradient amplitude values of the O pixel points and a sum of pixel value gradient amplitude values of the P pixel points is greater than a fourth preset value, and a difference between the second angle and the third angle is within a preset range;
In the embodiment of the application, the terminal device may calculate the gradient histogram of each sub-image. The pixel value gradient direction of each pixel point initially takes values in [0, 2π); in this application, two opposite pixel value gradient directions on the same straight line are considered to be the same pixel value gradient direction (for example, 0 and π are one pixel value gradient direction, and 0.5π and 1.5π are one pixel value gradient direction), that is, if the direction θ ≥ π, the direction is reassigned as θ := θ − π. Thus the value range of the pixel value gradient direction is mapped to [0, π). The terminal device may then perform histogram statistics on each sub-image. The horizontal axis of the gradient histogram is the pixel value gradient direction, with [0, π) divided into N segments, where N may typically be 12. The vertical axis of the histogram is the sum of the gradient magnitudes, i.e.

$$h_i = \sum_{p \in \Omega_i} \mathrm{mag}(p), \qquad \Omega_i = \{\, p \mid \theta(p) \in \mathrm{bin}(i) \,\}$$

wherein p represents a pixel point, mag(p) represents the pixel value gradient amplitude of the pixel point, θ(p) represents the direction of the point (taking values in [0, π)), and bin(i) represents the direction range corresponding to the i-th segment after the horizontal axis of the gradient histogram is divided into N segments. The segment number (taking values in [0, N)) with the largest histogram vertical-axis value (gradient amplitude sum) is defined as the main pixel value gradient direction of the grid, denoted bin_a; the second largest is the secondary pixel value gradient direction of the grid, denoted bin_b.
In the embodiment of the application, the terminal device may calculate a two-dimensional code score for each sub-image, where the score may represent the possibility that the sub-image belongs to an area where a two-dimensional code is located. Specifically, for each sub-image, the two-dimensional code score may be calculated according to the following formula:

$$E_2 = \frac{h_{\mathrm{bin}_a} + h_{\mathrm{bin}_b}}{H}$$

wherein a larger two-dimensional code score E_2 indicates that the sub-image is more likely to contain two-dimensional code components. A two-dimensional code score threshold ∈_2 may be set; if E_2 > ∈_2, the sub-image is a two-dimensional code component. In this embodiment, the closer the difference between the main pixel value gradient direction bin_a and the secondary pixel value gradient direction bin_b is to 90 degrees, the more the sub-image matches the characteristics of a two-dimensional code. In this embodiment of the present application, the difference between the pixel value gradient direction of the O pixel points and the second angle is within a preset angle range, the difference between the pixel value gradient direction of the P pixel points and the third angle is within the preset angle range, the sum of the pixel value gradient amplitudes of the O pixel points and the sum of the pixel value gradient amplitudes of the P pixel points is greater than a fourth preset value, and the difference between the second angle and the third angle is within a preset range. The second angle may be regarded as the main pixel value gradient direction bin_a, and the third angle may be regarded as the secondary pixel value gradient direction bin_b; the preset range within which the difference between the second angle and the third angle falls may be an angle range close to 90 degrees, for example between 85 and 95 degrees. It should be noted that, in this embodiment, the selection of the preset angle range may be based on the actual application, which is not limited in this application.
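Under the reconstruction of E_2 above, a corresponding per-cell decision could be sketched as follows; the threshold and the one-bin tolerance around 90 degrees are illustrative assumptions.

```python
def is_2d_component(hist, bin_a, bin_b, n_bins=12, eps2=0.6):
    # E2 = (h_bin_a + h_bin_b) / H, accepted only when the two dominant
    # directions are roughly perpendicular (about 90 degrees apart).
    H = float(hist.sum())
    if H == 0.0:
        return False
    circ_diff = min(abs(bin_a - bin_b), n_bins - abs(bin_a - bin_b))
    if abs(circ_diff - n_bins // 4) > 1:  # with 12 bins: 75-105 degrees
        return False
    e2 = (hist[bin_a] + hist[bin_b]) / H
    return e2 > eps2
```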
In this embodiment of the present application, after identifying a plurality of target sub-images corresponding to a sixth barcode, the terminal device may determine that an circumscribed rectangular area of the plurality of target sub-images corresponding to the sixth barcode is a second area where the sixth barcode is located.
In the embodiment of the application, in order to connect together a plurality of sub-images containing two-dimensional code components, the four adjacent sub-images above, below, and to the left and right of a sub-image (i, j) (i.e., (i-1, j), (i+1, j), (i, j-1), (i, j+1)) are considered. If sub-image (i, j) is a sub-image corresponding to the sixth barcode and one or more of the four adjacent sub-images are also sub-images corresponding to the sixth barcode, they are considered to be connected to each other. In one embodiment, the primary and secondary directions of sub-images (i-1, j) and (i, j) are required to be substantially consistent for (i-1, j) and (i, j) to be connected. It should be noted that there may be various ways of judging that the primary and secondary directions are substantially consistent; for example, a threshold may be preset as a criterion of similarity, which is not limited in this application.
In this embodiment of the present application, the terminal device may calculate, according to the connected region, the bounding-box coordinates of the circumscribed rectangular area, where the circumscribed rectangular area is the second area where the sixth barcode (second barcode) is located.
In the embodiment of the application, when a plurality of two-dimensional codes exist in the target image, sub-images corresponding to different two-dimensional codes can be numbered.
In this embodiment of the present application, after obtaining the scores (may be one-dimensional code scores or two-dimensional code scores) of the multiple sub-images corresponding to each barcode, the terminal device may calculate the score of the corresponding barcode based on the scores of the multiple sub-images corresponding to each barcode, and output the score of the barcode. In particular, an averaging method (i.e., calculating an average of scores of a plurality of sub-images) or the like may be used, but is not limited thereto. The score of the barcode may be calculated according to one or more of the contrast, pixel value, and other information of each sub-image, which is not limited herein.
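The averaging method mentioned above might be sketched as follows, assuming the per-sub-image scores of one connected component are available; the function name is illustrative.

```python
def barcode_score(cell_scores, component):
    # Averaging method: the barcode's score is the mean of the scores of
    # the sub-images (grid cells) that make up its connected component.
    return sum(cell_scores[cell] for cell in component) / len(component)
```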
In this embodiment of the present application, a sum of pixel value gradient amplitude values of the M pixel points is greater than a third preset value. Specifically, the terminal device may also perform grid gradient magnitude filtering of the sub-image. For each sub-image, calculating the sum of the pixel value gradient amplitudes of all pixel points of the sub-image, and if the sum of the pixel value gradient amplitudes is lower than a preset threshold value, considering that the sub-image does not contain the components of the one-dimensional code and the two-dimensional code.
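The grid gradient-magnitude filtering described here is a simple sum-and-threshold test per sub-image; the threshold value below (standing in for the "third preset value") is an illustrative assumption.

```python
def has_barcode_energy(mag_cell, third_preset_value=1000.0):
    # If the sum of pixel value gradient amplitudes in the cell is below
    # the preset threshold, the cell is considered to contain neither
    # one-dimensional nor two-dimensional code components.
    return float(mag_cell.sum()) >= third_preset_value
```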
In this embodiment of the present application, in one scene, the target image may include one or more barcodes that have a certain rotation angle with respect to the target image. The rotation angle may be understood as the minimum included angle between a boundary line of the barcode and an edge line of the target image being greater than a first preset angle; in the prior art, barcodes having such a rotation angle cannot be identified. The first preset angle may be determined according to the recognition capability of the terminal device and the type of the barcode. For example, if the terminal device cannot recognize a one-dimensional code when the minimum included angle between the boundary line of the one-dimensional code and the edge line of the target image is greater than 10 degrees, the first preset angle may be determined to be 10 degrees; likewise, if the terminal device cannot recognize a two-dimensional code when the minimum included angle between the boundary line of the two-dimensional code and the edge line of the target image is greater than 5 degrees, the first preset angle may be determined to be 5 degrees.
In this embodiment, the outer contour of the barcode is rectangular, and correspondingly, the first area or the second area corresponding to the barcode is also rectangular and includes four border lines. The target image may include a horizontal axis and a vertical axis, respectively parallel to two of the edge lines of the target image. As shown in fig. 8c, the one-dimensional code has four border lines 804, and each border line 804 forms a minimum included angle with the horizontal axis of the target image (the specific angle is labeled in fig. 8c). It should be noted that, in the embodiment of the present application, the minimum included angle refers to the included angle between the boundary line of the region and the horizontal axis of the image that is greater than zero and less than 90 degrees.
In order to solve the above-mentioned problem, in the embodiment of the present application, if the minimum included angle between a boundary line of a second target area and the transverse axis direction of the target image is greater than the first preset angle, the terminal device rotates the second target area so that this minimum included angle becomes less than the first preset angle, where the second target area is one of the M first areas and the N second areas.
The terminal device may perform a morphological operation on the second target area (a first area or a second area), extract the largest connected region obtained after the morphological operation using a region-connectivity algorithm, determine the boundary of that largest connected region to be the boundary line of the barcode corresponding to the second target area, and rotate the second target area based on that barcode boundary line and the boundary line of the second target area, so that the minimum included angle between the boundary line of the second target area and the transverse axis direction of the target image is smaller than the first preset angle.

Optionally, the terminal device may rotate the obtained second target area by the rotation angle and then decode the rotated second target area.
In this embodiment of the present application, after obtaining the second target area, the terminal device may perform a morphological operation on the pixel value gradient magnitudes of the second target area, where the morphological operation may be, but is not limited to, binary dilation, erosion, and so on. Then, a connected-domain extraction algorithm may be used to obtain the largest connected domain in the morphologically operated image; the connected-domain extraction algorithm may be, but is not limited to, a contour extraction algorithm. Referring to fig. 8a and 8b, which show an image processing schematic in this embodiment: fig. 8a shows the image obtained by performing the morphological operation on the gradient magnitudes, which includes a largest connected region 801, and the terminal device may determine that the boundary 804 of the circumscribed rectangle 803 of the largest connected region 801 is the boundary line of the barcode corresponding to the second target area, as shown in fig. 8b. Then the terminal device may rotate the second target area counterclockwise; the angle at which it first becomes parallel to one side of the circumscribed rectangle 803 is the deflection angle a of the circumscribed rectangle 803. For a two-dimensional code, the terminal device directly rotates the second target area clockwise by a when performing angle correction. For a one-dimensional code, each sub-image has a main direction (see the description of the above embodiments); the average of the main directions of the sub-images in the second target area, or the main direction obtained by voting, is denoted PB and taken as the main direction of the area. If PB > 0 and PB < N/2, the second target region is rotated clockwise by a. If PB >= N/2 and PB < N, the second target region is rotated clockwise by (a + 90°).
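The following Python/OpenCV sketch illustrates one plausible realization of this correction step; the kernel size, the binarization threshold, the helper name, and the rotation sign convention are assumptions for illustration rather than the patent's reference implementation. The gradient-magnitude map is dilated, the largest contour's minimum-area rectangle supplies the deflection angle, and the region is rotated accordingly.

```python
import cv2
import numpy as np

def correct_rotation(region_gray):
    """Estimate the deflection angle from the largest connected region and deskew.

    region_gray: grayscale crop of the second target area (first or second area).
    """
    # Gradient magnitude followed by a morphological (binary dilation) operation.
    gx = cv2.Sobel(region_gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(region_gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    binary = (mag > mag.mean()).astype(np.uint8) * 255  # assumed threshold
    binary = cv2.dilate(binary, np.ones((5, 5), np.uint8))

    # Largest connected domain via contour extraction; its minimum-area rectangle
    # plays the role of the circumscribed rectangle 803 and gives the angle a.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return region_gray
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), _, angle = cv2.minAreaRect(largest)

    # Rotate the region so its boundary lines align with the image axes.
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    h, w = region_gray.shape
    return cv2.warpAffine(region_gray, rot, (w, h))
```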
In this embodiment, referring to fig. 8c, which is a schematic illustration of a target image, the target image has four edge lines 805, and the minimum included angle between the boundary line 804 of the first barcode and the transverse (horizontal) axis of the target image is the angle shown in fig. 8c.
In this embodiment of the present application, after determining that the minimum included angle between the boundary line of a second target area and the transverse axis direction of the target image is greater than the first preset angle, the terminal device may rotate the area so that this minimum included angle becomes smaller than the first preset angle.
In this embodiment, the terminal device obtains a second target area in the target image and rotates it; for example, the second target area may be extracted separately from the target image and rotated until the minimum included angle between its boundary line and the transverse axis direction of the target image is smaller than the first preset angle. The image content of the second target region is unchanged before and after rotation; only its orientation changes.
It should be noted that in the embodiment of the present application, false-detection filtering may further be performed on the obtained first areas and second areas. Specifically, at least one of two criteria, line density and rectangularity, may be used to determine whether a first area or a second area is a false detection. For line density, the detected image is scanned and the number of black-white transitions is counted; if the number of transitions is smaller than a preset threshold, the first area or second area is marked as a false detection and rejected. For rectangularity, the area of the obtained largest connected region is compared with the area of its minimum circumscribed rectangle; if the ratio is smaller than a preset threshold, the first area or second area is marked as a false detection and rejected.
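A minimal sketch of both checks; the thresholds and the choice of the central scanline are illustrative assumptions. Line density counts black-white transitions along the middle row of a binarized crop, and rectangularity compares the connected region's area with that of its minimum circumscribed rectangle.

```python
import cv2
import numpy as np

def line_density_ok(binary_crop, min_transitions=10):
    """Count black-white jumps on the central scanline of a 0/255 image."""
    row = binary_crop[binary_crop.shape[0] // 2]
    transitions = np.count_nonzero(row[1:] != row[:-1])
    return transitions >= min_transitions

def rectangularity_ok(contour, min_ratio=0.6):
    """Compare the connected region's area with its min-area rectangle's area."""
    (_, _), (w, h), _ = cv2.minAreaRect(contour)
    rect_area = w * h
    return rect_area > 0 and cv2.contourArea(contour) / rect_area >= min_ratio
```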
In this embodiment of the present application, the terminal device may output the boundary coordinates of each first area and second area and the barcode type (one-dimensional code or two-dimensional code) of each area; for a barcode rotated by an angle, the terminal device may also output the rotation angle (for example, the angle between a boundary line of the barcode and an edge line of the target image).
304. The terminal device decodes the M first areas and the N second areas.
In this embodiment of the present application, after obtaining the M first areas, each including a one-dimensional code of the plurality of barcodes, and the N second areas, each including a two-dimensional code of the plurality of barcodes, the terminal device may decode each of the M first areas and the N second areas.
In this embodiment of the present application, the terminal device may decode the M first areas based on a decoding rule of a one-dimensional code; and decoding the N second areas based on a decoding rule of the two-dimensional code.
Specifically, for different barcode types, the terminal device may call the decoding control corresponding to the barcode type. Since barcodes of different code systems are decoded differently, multiple decoding controls need to be provided for decoding them. After the barcode category of a barcode is obtained, the corresponding decoding control is obtained according to that category, and the decoding of the barcode is performed with it. In particular, for one-dimensional barcodes the decoding process is simpler, so one-dimensional barcodes of different code systems can be decoded by different program segments within a single decoding control. The decoding controls for decoding two-dimensional barcodes may exist in discrete form or may be packaged together. The terminal device can parse characters or data streams from the barcodes to obtain a plurality of decoding contents, including M decoding contents corresponding to the M first areas and N decoding contents corresponding to the N second areas, where each decoding content includes a character string and is used to trigger a corresponding function; the functions include at least one of: jumping to a corresponding webpage; opening a target function in a corresponding application program; displaying the corresponding character string; displaying the corresponding video; or playing the corresponding audio.
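A hedged Python sketch of this per-code-system dispatch; the decoder callables and category labels below are hypothetical placeholders, not a real barcode library's API, and a real product would call an actual decoding control or library here. Each region is routed to the decoding control registered for its detected barcode category.

```python
def decode_ean13(image_region):
    # Hypothetical placeholder: a real control would implement EAN-13 decoding rules.
    return "<ean13 payload>"

def decode_qr(image_region):
    # Hypothetical placeholder: a real control would implement QR decoding rules.
    return "<qr payload>"

# One decoding control per code system, selected by barcode category.
DECODING_CONTROLS = {
    "1D/EAN-13": decode_ean13,
    "2D/QR": decode_qr,
}

def decode_regions(regions):
    """regions: iterable of (barcode_category, image_region) pairs.

    Returns the list of decoded character strings (the decoding contents).
    """
    contents = []
    for category, image_region in regions:
        control = DECODING_CONTROLS.get(category)
        if control is not None:
            contents.append(control(image_region))
    return contents
```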
It should be noted that, in a scenario where a barcode has a certain rotation angle, the terminal device decodes the rotated first area or second area. Also, when multiple barcodes exist in the target image, the terminal device may decode only the first areas or second areas of some of the barcodes based on the score of each barcode, for example, based on a ranking of the scores.
In this embodiment of the present application, the terminal device may generate corresponding prompt information based on each of L decoding contents, where the L decoding contents belong to the M+N decoding contents, and the prompt information includes: a prompt of the function corresponding to the current decoding content, a prompt that the function corresponding to the current decoding content cannot be triggered, or recommendation information of an application program that includes a function capable of triggering the current decoding content. The L pieces of prompt information are then displayed, where L is a positive integer less than or equal to M+N.
Next, it is described how the terminal device determines the L decoding contents from the M+N decoding contents.
In one embodiment, the L decoding contents do not include decoding contents whose corresponding bar code is a one-dimensional code.
In this embodiment, the M+N barcodes corresponding to the M+N decoding contents may include at least L two-dimensional codes, and the terminal device may treat two-dimensional codes as having higher priority than one-dimensional codes, that is, the functions of the decoding contents corresponding to two-dimensional codes are output preferentially.
In one embodiment, the terminal device may obtain the definition (sharpness) of each of the M+N barcodes by performing definition recognition on the barcodes in the M first areas and the N second areas, and determine the L decoding contents from the plurality of decoding contents using these definitions, where the barcodes corresponding to the L decoding contents are the first L barcodes after the M+N barcodes are sorted by definition from high to low.
In one embodiment, the terminal device may obtain the size of each of the M+N barcodes by recognizing the sizes of the barcodes in the M first areas and the N second areas, and determine the L decoding contents from the plurality of decoding contents according to these sizes, where the barcodes corresponding to the L decoding contents are the first L barcodes after the M+N barcodes are sorted by size from large to small.
In one embodiment, the target image is obtained through a barcode recognition function of a target application program, and the terminal device may obtain the frequency of use, within the target application program, of the function corresponding to each of the plurality of decoding contents, and determine the L decoding contents according to these frequencies of use, where the L decoding contents are the first L decoding contents after the M+N decoding contents are sorted by frequency of use from high to low.
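A small Python sketch of the selection strategies above; the field names are illustrative assumptions. Candidates are ranked by definition, size, or usage frequency and the top L are kept, with a variant that prioritizes two-dimensional codes.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    content: str       # decoded character string
    is_2d: bool        # two-dimensional code or not
    definition: float  # sharpness score of the barcode region
    size: float        # area of the barcode region
    usage_freq: float  # frequency of use of the triggered function

def select_top_l(candidates, l, key="definition"):
    """Keep the first L candidates after sorting by the chosen criterion."""
    return sorted(candidates, key=lambda c: getattr(c, key), reverse=True)[:l]

def select_2d_first(candidates, l):
    """Variant that gives two-dimensional codes priority over one-dimensional ones."""
    return [c for c in candidates if c.is_2d][:l]
```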
In this embodiment of the present application, after determining the L decoding contents from the M+N decoding contents, the terminal device may generate corresponding prompt information based on each of the L decoding contents, where the prompt information includes: a prompt of the function corresponding to the current decoding content, a prompt that the function corresponding to the current decoding content cannot be triggered, or recommendation information of an application program that includes a function capable of triggering the current decoding content; the L pieces of prompt information are then displayed.
In this embodiment of the present application, if the terminal device can trigger the function corresponding to a decoding content, a prompt of that function may be displayed; the content of the prompt information is related to the function, for example, for a function of jumping to a website, the prompt information may be the website name.
In the embodiment of the present application, if the terminal device cannot trigger the function corresponding to a decoding content, a prompt that the function cannot be triggered may be displayed; for example, the terminal device may display "the barcode cannot be identified". Alternatively, the terminal device may display recommendation information of an application program that includes a function capable of triggering the decoding content, for example, "the barcode can be recognized in application A".
The display position of the hint information is described next.
In this embodiment of the present application, the distance between the display position of each piece of prompt information and the corresponding first target area is within a preset range, where the first target area is one of the M first areas and the N second areas. Referring to fig. 8d, the distance between the display position of each prompt message 806 and its corresponding region is within a preset range, which may be set so that the association between a prompt message and its region can be seen visually from the display position.
In this embodiment of the present application, the terminal device may further display an association identifier used to indicate the association relationship between each piece of prompt information and the corresponding first target area, where the first target area is one of the M first areas and the N second areas. In this embodiment, the association identifier may indicate both the first target area and the corresponding prompt information. Referring to fig. 8e, the association identifier is a character, for example "1": one character "1" is located near the first target area and another character "1" is located near the prompt message, and the shared character "1" establishes the association between the first target area and the corresponding prompt information. It should be noted that the above association identifier is only an illustration, and in practical applications the specific implementation of the association identifier is not limited.
In the embodiment of the application, the terminal device may display the prompt information corresponding to each of the L decoding contents for the user to choose from. The user may select the decoding content to be opened by operating on one of the prompt messages; the terminal device receives the user's selection instruction, which indicates the selection of a target prompt message (one of the L prompt messages), and, in response to the selection instruction, triggers the function of the decoding content corresponding to the target prompt message.
In particular, if no selection instruction for the L prompt messages is received from the user within a preset time, a first decoding content is determined from the M+N decoding contents, where the frequency of use of the function corresponding to the first decoding content is the highest among the plurality of decoding contents, the definition of the barcode corresponding to the first decoding content is the highest among the plurality of decoding contents, or the size of the barcode corresponding to the first decoding content is the largest among the plurality of decoding contents. The target image is acquired through a target application program, the frequency of use is the target application program's frequency of use of the function corresponding to the decoding content, and the function corresponding to the first decoding content is then triggered.
That is, in this embodiment, if the terminal device receives, within the preset time, a user's selection instruction for one of the L prompt messages, it triggers the function of the decoding content corresponding to that target prompt message in response to the instruction. If no selection instruction for any of the L prompt messages is received within the preset time, the terminal device determines the first decoding content from the M+N decoding contents based on one of the rules above and triggers the corresponding function.
The embodiment of the application provides a multi-barcode identification method, which comprises the following steps: acquiring a target image, where the target image includes a plurality of barcodes that do not overlap with each other and include one-dimensional codes and two-dimensional codes; acquiring gradient information of the target image; determining M first areas and N second areas from the target image according to the gradient information, where each first area corresponds to a one-dimensional code of the plurality of barcodes, each second area corresponds to a two-dimensional code of the plurality of barcodes, and M and N are positive integers; and decoding the M first areas and the N second areas. By this method, the terminal device identifies the barcode type of each barcode and the area where it is located, and decodes each area, so that the terminal device can identify multiple kinds of barcodes included in the same image.
Referring to fig. 9, fig. 9 is a schematic diagram of a barcode identification method provided in an embodiment of the present application, and as shown in fig. 9, the barcode identification method provided in the embodiment of the present application includes:
901. the terminal equipment acquires a target image, wherein the target image comprises a plurality of bar codes, and the bar codes are not overlapped with each other.
The description of step 901 may refer to, but is not limited to, the description of step 301, and will not be repeated here.
902. And the terminal equipment decodes M target bar codes in the plurality of bar codes to obtain M decoded contents, wherein M is a positive integer.
The description of step 902 may refer to, but is not limited to, the descriptions of steps 302 to 304, and will not be repeated here.
903. The terminal equipment generates corresponding prompt information based on each decoding content.
The description of step 903 may refer to, but is not limited to, the description of generating the corresponding prompt information based on each decoding content in the embodiment corresponding to fig. 3, which is not repeated here.
904. The terminal equipment outputs M prompt messages.
The description of step 904 may refer to, but is not limited to, the description about outputting M pieces of prompt information in the corresponding embodiment of fig. 3, which is not repeated here.
Optionally, a selection instruction of a user is received, wherein the selection instruction indicates selection of target prompt information, and the target prompt information is one of the M prompt information;
Responding to the selection instruction, triggering the function of the decoding content corresponding to the target prompt information, wherein the function at least comprises one of the following steps:
jumping to a corresponding webpage;
opening a target function in a corresponding application program;
displaying the corresponding character string;
displaying the corresponding video;
or, playing the corresponding audio.
Optionally, identifying that the plurality of barcodes includes M two-dimensional codes and N one-dimensional codes;
and determining the M two-dimensional codes as the M target bar codes.
Optionally, obtaining the definition of each barcode in the plurality of barcodes;
and determining, as the M target barcodes, the first M barcodes after the plurality of barcodes are sorted by definition from high to low.
Optionally, acquiring the size of each barcode in the plurality of barcodes;
and determining, as the M target barcodes, the first M barcodes after the plurality of barcodes are sorted by size from large to small.
Optionally, the target image is acquired through a barcode recognition function of the target application program, and the method further includes:
acquiring the use frequency of each bar code in the plurality of bar codes in the target application program;
and determining, as the M target barcodes, the first M barcodes after the plurality of barcodes are sorted by frequency of use from high to low.
Optionally, the target image is obtained through a barcode recognition function of the target application, and the generating the corresponding prompt information based on each decoded content includes:
if the function of the first decoding content cannot be triggered by the target application program, generating first prompt information, wherein the first prompt information comprises a prompt that the function corresponding to the decoding content cannot be triggered or recommendation information of the application program that the function corresponding to the decoding content can be triggered;
if the function of the first decoding content can be triggered by the target application program, generating second prompt information, wherein the second prompt information comprises a prompt of the function corresponding to the first decoding content;
wherein the first decoding content is one of the plurality of decoding contents.
The embodiment of the application provides a barcode identification method, which comprises the following steps: the terminal device acquires a target image, where the target image includes a plurality of barcodes that do not overlap with each other. The terminal device decodes M target barcodes among the plurality of barcodes to obtain M decoding contents, where M is a positive integer. The terminal device generates corresponding prompt information based on each decoding content and outputs the M prompt messages. By this method, the terminal device can identify a plurality of barcodes included in the same image and present the barcodes to the user for selection.
Referring to fig. 10, fig. 10 is a schematic diagram of a barcode identification method provided in an embodiment of the present application, and as shown in fig. 10, the barcode identification method provided in the embodiment of the present application includes:
1001. the terminal equipment acquires a target image, wherein the target image comprises a plurality of bar codes, and the bar codes are not overlapped with each other.
The description of step 1001 may refer to the description of step 301, and will not be repeated here.
1002. The terminal equipment decodes the plurality of bar codes to obtain decoding contents respectively corresponding to the plurality of bar codes;
the description of step 1002 may refer to the descriptions of steps 302 to 304, and will not be repeated here.
1003. And the terminal equipment determines a target bar code from the plurality of bar codes according to the decoding content respectively corresponding to the plurality of bar codes.
In this embodiment of the present application, the terminal device may determine a target barcode from the plurality of barcodes according to the decoding contents respectively corresponding to them. Specifically, the terminal device may obtain the frequency of use, in the target application program, of the function corresponding to each decoding content, and determine the barcode whose corresponding function has the highest frequency of use as the target barcode. That is, in the embodiment of the present application, the terminal device may select the target barcode from the plurality of barcodes based on this frequency of use and output the decoding content corresponding to the target barcode.
1004. The terminal equipment triggers the function of decoding content corresponding to the target bar code; wherein the functions include at least one of: jumping to a corresponding webpage; opening a target function in a corresponding application program; displaying the corresponding character string; displaying the corresponding video; or, playing the corresponding audio.
The embodiment of the application provides a bar code identification method, which comprises the following steps: the terminal equipment acquires a target image, wherein the target image comprises a plurality of bar codes, and the bar codes are not overlapped with each other; the terminal equipment decodes the plurality of bar codes to obtain decoding contents respectively corresponding to the plurality of bar codes; and the terminal equipment determines a target bar code from the plurality of bar codes according to the decoding content respectively corresponding to the plurality of bar codes. By the method, the terminal equipment can identify a plurality of bar codes included in the same image and trigger one bar code.
Referring to fig. 11, fig. 11 is a schematic diagram of a barcode identification method provided in an embodiment of the present application, and as shown in fig. 11, the barcode identification method provided in the embodiment of the present application includes:
1101. the terminal equipment displays a first control and a second control, wherein the first control is used for triggering a first bar code identification method, the second control is used for triggering a second bar code identification method, and the first bar code identification method comprises the following steps: decoding at least two barcodes in an image comprising a plurality of barcodes, the second barcode identification method comprising: one barcode in an image including a plurality of barcodes is decoded.
In the embodiment of the application, the first control and the second control may be different controls displayed at the same position, and the user may switch between them by clicking.
In this embodiment of the present application, the terminal device may display two controls (a first control and a second control) corresponding to two modes, where the first control is used to trigger a first barcode identification method, and the second control is used to trigger a second barcode identification method, and the first barcode identification method includes: decoding at least two barcodes in an image comprising a plurality of barcodes, the second barcode identification method comprising: one barcode in an image including a plurality of barcodes is decoded.
In addition, both the first barcode recognition method and the second barcode recognition method may also decode the single barcode in an image that includes only one barcode.
1102. And if the terminal equipment receives a first selection operation of the user on the first control, executing the first bar code identification method on the acquired target image.
1103. And if the terminal equipment receives a second selection operation of the user on the second control, executing the second bar code identification method on the acquired target image.
In this embodiment, after the user performs the first selection operation on the first control, if the target image acquired by the terminal device includes a plurality of barcodes, at least two of them may be decoded; the description of how to decode at least two of a plurality of barcodes may refer to steps 301 to 304 and is not repeated here. After the user performs the second selection operation on the second control, if the target image includes a plurality of barcodes, one of them may be decoded; reference may be made to any single-code identification method in the prior art, which is not described here. After the user performs the first selection operation on the first control, if the target image includes only one barcode, that single barcode may be decoded, again by any single-code identification method in the prior art. Likewise, after the user performs the second selection operation on the second control, if the target image includes one barcode, that single barcode may be decoded.
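A minimal sketch of this two-control mode switch; the mode labels and the detect/decode helpers are assumptions introduced for illustration. The selected control determines whether all detected barcodes or only one are decoded.

```python
def recognize(target_image, mode, detect, decode):
    """mode: 'multi' (first control) decodes every detected barcode;
    'single' (second control) decodes only one.

    detect: callable returning the barcode regions found in the image.
    decode: callable decoding a single region into a character string.
    """
    regions = detect(target_image)
    if not regions:
        return []
    if mode == "multi":
        return [decode(r) for r in regions]  # at least two when present
    return [decode(regions[0])]              # single-code identification
```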
Optionally, the terminal device may acquire gradient information of an image including a plurality of barcodes;
determining M first areas and N second areas from the image comprising the plurality of bar codes according to the gradient information, wherein each first area corresponds to one-dimensional code in the plurality of bar codes, each second area corresponds to one two-dimensional code in the plurality of bar codes, and M and N are positive integers;
the M first regions and N second regions are coded.
Optionally, the terminal device may decode the M first areas based on a decoding rule of the one-dimensional code; and decoding the N second areas based on a decoding rule of the two-dimensional code.
Optionally, the terminal device may obtain m+n decoding contents obtained after decoding, where the m+n decoding contents include M decoding contents corresponding to M first areas and N decoding contents corresponding to N second areas, each decoding content includes a character string, and each decoding content is used to trigger a corresponding function; wherein the functions include at least one of: jumping to a corresponding webpage; opening a target function in a corresponding application program; displaying the corresponding character string; displaying the corresponding video; or, playing the corresponding audio.
In this embodiment, descriptions of the same or similar technical features as those in the embodiment corresponding to fig. 3 are not repeated.
In this embodiment, the terminal device displays a first control and a second control, where the first control is used to trigger a first barcode identification method and the second control is used to trigger a second barcode identification method; the first barcode identification method comprises decoding at least two barcodes in an image including a plurality of barcodes, and the second barcode identification method comprises decoding one barcode in an image including a plurality of barcodes. If the terminal device receives a first selection operation on the first control from the user, the first barcode identification method is executed on the acquired target image; if the terminal device receives a second selection operation on the second control, the second barcode identification method is executed on the acquired target image. In this way, the user is offered a choice between single-code and multi-code identification.
Next, a specific example is used to further describe the multi-barcode identification method provided in the present application, and referring to fig. 12a, fig. 12a is a schematic application example of the multi-barcode identification method provided in the embodiment of the present application, and as shown in fig. 12a, the multi-barcode identification method provided in the embodiment of the present application includes:
1201. The terminal device acquires a target image.
In one embodiment, a user may open the barcode recognition function of an application program and select its barcode scanning function, which calls the camera of the terminal device; that is, the user may invoke the barcode scanning function through a third-party application installed on the terminal device and shoot an area in which a plurality of barcodes are arranged, so that the terminal device obtains a target image including the plurality of barcodes.
The terminal device may also acquire a target image including a plurality of barcodes from a local album or a cloud album. In this embodiment, the user may open the barcode recognition function of an application program and choose to select an image from a local or cloud album; this function opens the album on the terminal device, and the user selects the target image to be barcode-recognized, which may include a plurality of barcodes, so that the terminal device acquires a target image including a plurality of barcodes from the local or cloud album.
If only one barcode exists in the target image acquired by the terminal device, the terminal device may directly pop up a new window after decoding succeeds and display the barcode content. If the barcode content points to a link such as a website URL, no new window is popped up and the terminal device jumps to it directly.
In particular, before the terminal device acquires the target image, a first control and a second control may be displayed, where the first control is used to trigger a first barcode recognition method and the second control is used to trigger a second barcode recognition method; the first barcode recognition method comprises decoding at least two barcodes in an image including a plurality of barcodes, and the second barcode recognition method comprises decoding one barcode in an image including a plurality of barcodes. If the terminal device receives a first selection operation on the first control from the user, the first barcode recognition method is executed on the acquired target image; if it receives a second selection operation on the second control, the second barcode recognition method is executed. The first barcode recognition method may be implemented as steps 1202 to 1203 below.
The terminal device identifies a plurality of barcodes in the target image and decodes each barcode to obtain the decoding content of each barcode.
The steps of step 1202 may refer to the descriptions of steps 302 to 304 in the above embodiments, and are not repeated here.
1203. The terminal equipment generates corresponding prompt information based on each decoding content and outputs the prompt information.
Referring to fig. 12b to fig. 12d, which are schematic diagrams of a multi-barcode recognition flow provided in this embodiment of the present application: as shown in fig. 12b, a user may scan an area containing a plurality of barcodes (two two-dimensional codes and one one-dimensional code in fig. 12b); the terminal device obtains a target image including the plurality of barcodes, obtains the area corresponding to each barcode, decodes each area, and obtains a decoding result for each barcode. As shown in fig. 12c, for the first two-dimensional code, since its decoding content is to jump to a page for adding the contact Zhang San as a friend, the corresponding prompt information may be "Add Zhang San as friend", and the prompt information may be displayed near the two-dimensional code. For the second two-dimensional code, since its decoding content is to follow a certain official account, the corresponding prompt information may be "Follow official account XX"; for the one-dimensional code, since its decoding content is a product number, the corresponding prompt information may be "Barcode of product A". The user may select one of the prompt messages, and the terminal device receives the corresponding selection instruction and outputs the decoding content. For example, the user may select the prompt message "Add Zhang San as friend" shown in fig. 12c, and correspondingly, as shown in fig. 12d, the terminal device jumps to the interface for adding Zhang San as a friend. In addition, the page in fig. 12d may also include a return control that the user may click to reselect the decoding content to view.
It should be noted that the controls and arrangements included in the above interface are only illustrative, and not limiting of the present application.
Next, a multi-barcode recognition method in the present application is described with reference to fig. 13 by using a server as an execution body, and fig. 13 is an exemplary illustration of a multi-barcode recognition method provided in an embodiment of the present application, as shown in fig. 13, where the multi-barcode recognition method provided in the embodiment of the present application includes:
1301. the method comprises the steps that a server receives a target image sent by a terminal device, wherein the target image comprises a plurality of bar codes, the bar codes are not overlapped with each other, and the bar codes comprise one-dimensional codes and two-dimensional codes;
1302. the server acquires gradient information of the target image;
1303. the server determines M first areas and N second areas from the target image according to the gradient information, wherein each first area corresponds to one-dimensional code in the plurality of bar codes, each second area corresponds to one two-dimensional code in the plurality of bar codes, and M and N are positive integers;
1304. the server decodes the M first regions and the N second regions.
Optionally, the decoding the M first regions and the N second regions includes:
Decoding the M first areas based on a decoding rule of a one-dimensional code;
and decoding the N second areas based on a decoding rule of the two-dimensional code.
Optionally, the method further comprises:
obtaining M+N decoding contents obtained after decoding, wherein the M+N decoding contents comprise M decoding contents corresponding to M first areas and N decoding contents corresponding to N second areas, each decoding content comprises a character string, and each decoding content is used for triggering a corresponding function;
wherein the functions include at least one of:
jumping to a corresponding webpage;
opening a target function in a corresponding application program;
displaying the corresponding character string;
displaying the corresponding video;
or, playing the corresponding audio.
Optionally, before the decoding each of the M first regions and the N second regions, the method further includes:
and if the minimum included angle between the boundary line of the second target area and the transverse axis direction of the target image is larger than a first preset angle, rotating the second target area so that the minimum included angle between the boundary line of the second target area and the transverse axis direction of the target image is smaller than the first preset angle, wherein the second target area is one of the M first areas and the N second areas.
Optionally, the gradient information includes a pixel value gradient magnitude value and a pixel value gradient direction, the gradient information represents pixel value variation information of each pixel point of the target image, the pixel value gradient direction represents a pixel value maximum variation direction of each pixel point, and the pixel value gradient magnitude value represents a pixel value variation magnitude of the pixel value maximum variation direction of each pixel point.
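As an illustration of this gradient information, here is a brief Python/OpenCV sketch; Sobel operators are one common choice, and the patent does not mandate a specific operator. Per-pixel gradient magnitude and gradient direction of the target image are computed.

```python
import cv2
import numpy as np

def gradient_info(gray):
    """Return per-pixel gradient magnitude and direction of a grayscale image.

    The direction is the direction of maximum pixel value change; the
    magnitude is the amount of change along that direction.
    """
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    magnitude = cv2.magnitude(gx, gy)
    direction = np.arctan2(gy, gx)  # radians in (-pi, pi]
    return magnitude, direction
```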
Optionally, the determining M first regions and N second regions from the target image according to gradient information includes:
performing image division on the target image to obtain a plurality of sub-images;
determining a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image comprises M pixel points, the difference between the pixel value gradient direction of N pixel points in the M pixel points and a first angle is in a preset angle range, the ratio of the sum of the pixel value gradient amplitude values of the N pixel points to the sum of the pixel value gradient amplitude values of the M pixel points is larger than a first preset value, and any one target sub-image in the plurality of target sub-images is adjacent to at least one target sub-image except the target sub-image;
And determining the circumscribed rectangular area of the target sub-images as a first area.
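A hedged NumPy sketch of the one-dimensional-code sub-image test described in the preceding optional steps; the angle tolerance and the ratio threshold (the first preset value) are assumptions for illustration. A sub-image qualifies when the gradient energy concentrated near a single dominant angle exceeds the preset fraction of the sub-image's total gradient energy.

```python
import numpy as np

def is_1d_cell(mag, ang, first_angle, tol=np.radians(10), first_preset_value=0.7):
    """mag, ang: per-pixel gradient magnitude/direction arrays of one sub-image."""
    # Wrapped angle difference between each pixel's direction and the first angle.
    diff = np.abs(np.angle(np.exp(1j * (ang - first_angle))))
    near = diff < tol  # the N pixel points within the preset angle range
    total = mag.sum()
    return total > 0 and mag[near].sum() / total > first_preset_value
```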
Optionally, the determining M first regions and N second regions from the target image according to gradient information includes:
performing image division on the target image to obtain a plurality of sub-images;
identifying a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image comprises M pixel points, the difference between the pixel value gradient direction of O pixel points in the M pixel points and a second angle is within a preset angle range, the difference between the pixel value gradient direction of P pixel points in the M pixel points and a third angle is within the preset angle range, the sum of the pixel value gradient amplitude values of the O pixel points and the sum of the pixel value gradient amplitude values of the P pixel points is larger than a fourth preset value, the difference between the second angle and the third angle is within the preset range, and any one of the plurality of target sub-images is adjacent to at least one target sub-image except the target sub-image;
and determining the circumscribed rectangular area of the target sub-images as a second area.
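Similarly, a sketch of the two-dimensional-code sub-image test from the preceding optional steps; the thresholds are assumptions, and the second and third angles are assumed here to be approximately perpendicular, as the two edge directions of a two-dimensional code are. The sub-image must contain two dominant gradient directions whose combined gradient energy exceeds the fourth preset value.

```python
import numpy as np

def is_2d_cell(mag, ang, second_angle, tol=np.radians(10), fourth_preset_value=500.0):
    """Two roughly perpendicular dominant directions mark a two-dimensional code cell."""
    third_angle = second_angle + np.pi / 2  # assumed ~90 degrees from the second angle
    d2 = np.abs(np.angle(np.exp(1j * (ang - second_angle))))
    d3 = np.abs(np.angle(np.exp(1j * (ang - third_angle))))
    # Summed gradient magnitudes of the O and P pixel points.
    energy = mag[d2 < tol].sum() + mag[d3 < tol].sum()
    return energy > fourth_preset_value
```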
Optionally, the pixel value gradient amplitude value of each pixel point in the M pixel points is greater than a second preset value.
Optionally, the sum of the pixel value gradient amplitude values of the M pixel points is greater than a third preset value.
Optionally, the method further comprises:
and carrying out morphological operation on the image region where the first bar code is located, extracting a maximum communication region obtained after the morphological operation based on a region communication algorithm, and determining the boundary of the maximum communication region as the boundary line of the first bar code.
The descriptions of steps 1301 to 1304 may refer to the descriptions in the above embodiments, and are not repeated here.
Referring to fig. 14, fig. 14 is a schematic structural diagram of a terminal device provided in the embodiment of the present application, where terminal device 1400 includes:
an acquisition module 1401, configured to acquire a target image, where the target image includes a plurality of barcodes, the plurality of barcodes are not overlapped with each other, and the plurality of barcodes includes a one-dimensional code and a two-dimensional code; acquiring gradient information of the target image;
a determining module 1402, configured to determine M first regions and N second regions from the target image according to the gradient information, where each first region corresponds to one-dimensional code of the plurality of barcodes, each second region corresponds to one two-dimensional code of the plurality of barcodes, and M and N are positive integers;
The decoding module 1403 is configured to decode the M first regions and the N second regions.
Optionally, the decoding module is specifically configured to:
decoding the M first areas based on a decoding rule of a one-dimensional code;
and decoding the N second areas based on a decoding rule of the two-dimensional code.
Optionally, the acquiring module 1401 is further configured to:
obtaining M+N decoding contents obtained after decoding, wherein the M+N decoding contents comprise M decoding contents corresponding to M first areas and N decoding contents corresponding to N second areas, each decoding content comprises a character string, and each decoding content is used for triggering a corresponding function;
wherein the functions include at least one of:
jumping to a corresponding webpage;
opening a target function in a corresponding application program;
displaying the corresponding character string;
displaying the corresponding video;
or, playing the corresponding audio.
Optionally, the terminal device further includes: an output module 1404 for:
generating corresponding prompt information based on each of L decoding contents, where the L decoding contents belong to the M+N decoding contents, and the prompt information includes:
a prompt of the function corresponding to the current decoding content, a prompt that the function corresponding to the current decoding content cannot be triggered, or recommendation information of an application program that includes a function capable of triggering the current decoding content;
and displaying the L prompt messages, wherein L is a positive integer less than or equal to M+N.
Optionally, the terminal device further includes:
the receiving module is used for receiving a selection instruction of a user, wherein the selection instruction indicates the selection of target prompt information, and the target prompt information is one of the L prompt information;
the output module is further used for responding to the selection instruction and triggering the function of decoding content corresponding to the target prompt information.
Optionally, the L decoding contents do not include decoding contents in which the corresponding bar code is a one-dimensional code.
Optionally, the acquiring module 1401 is further configured to:
the definition of each bar code in the M+N bar codes is obtained by carrying out the definition identification of the bar codes in the M first areas and the N second areas;
the determining module 1402 is further configured to:
determining the L decoding contents from the plurality of decoding contents according to the definition of each of the M+N barcodes, where the barcodes corresponding to the L decoding contents are the first L barcodes after the M+N barcodes are sorted by definition from high to low.
Optionally, the acquiring module is further configured to:
the size of each bar code in the M+N bar codes is obtained by recognizing the sizes of the bar codes in the M first areas and the N second areas;
the determining module is further configured to:
and determining the L decoding contents from the plurality of decoding contents according to the size of each of the M+N barcodes, where the barcodes corresponding to the L decoding contents are the first L barcodes after the M+N barcodes are sorted by size from large to small.
Optionally, the target image is acquired through a barcode recognition function of the target application, and the acquiring module is further configured to:
acquiring the use frequency of the function corresponding to each decoding content in the plurality of decoding contents in the target application program;
the determining module 1402 is further configured to:
and determining the L decoding contents from the plurality of decoding contents according to the frequency of use of the function corresponding to each decoding content, where the L decoding contents are the first L decoding contents after the M+N decoding contents are sorted by frequency of use from high to low.
Optionally, the distance between the display position of each prompt message and the corresponding first target area is within a preset range, wherein the first target area is one of the M first areas and the N second areas.
Optionally, the output module 1404 is further configured to:
and displaying an association identifier, wherein the association identifier is used for indicating the association relation between each prompt message and a corresponding first target area, and the first target area is one of the M first areas and the N second areas.
Optionally, the output module 1404 is further configured to:
if no selection instruction for the L prompt messages is received from the user within a preset time, determining a first decoding content from the M+N decoding contents, where the frequency of use of the function corresponding to the first decoding content is the highest among the plurality of decoding contents, the definition of the barcode corresponding to the first decoding content is the highest among the plurality of decoding contents, or the size of the barcode corresponding to the first decoding content is the largest among the plurality of decoding contents; the target image is acquired through a target application program, and the frequency of use is the target application program's frequency of use of the function corresponding to the decoding content;
Triggering the function corresponding to the first decoding content.
Optionally, the terminal device further includes:
and the rotating module is used for rotating the second target area if the minimum included angle between the boundary line of the second target area and the transverse axis direction of the target image is larger than a first preset angle, so that the minimum included angle between the boundary line of the second target area and the transverse axis direction of the target image is smaller than the first preset angle, and the second target area is one of the M first areas and the N second areas.
Optionally, the gradient information includes a pixel value gradient magnitude value and a pixel value gradient direction, the gradient information represents pixel value variation information of each pixel point of the target image, the pixel value gradient direction represents a pixel value maximum variation direction of each pixel point, and the pixel value gradient magnitude value represents a pixel value variation magnitude of the pixel value maximum variation direction of each pixel point.
Optionally, the determining module 1402 is specifically configured to:
performing image division on the target image to obtain a plurality of sub-images;
determining a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image comprises M pixel points, the difference between the pixel value gradient direction of N pixel points in the M pixel points and a first angle is in a preset angle range, the ratio of the sum of the pixel value gradient amplitude values of the N pixel points to the sum of the pixel value gradient amplitude values of the M pixel points is larger than a first preset value, and any one target sub-image in the plurality of target sub-images is adjacent to at least one target sub-image except the target sub-image;
And determining the circumscribed rectangular area of the target sub-images as a first area.
Optionally, the determining module is specifically configured to:
performing image division on the target image to obtain a plurality of sub-images;
identifying a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image comprises M pixel points, the difference between the pixel value gradient direction of O pixel points in the M pixel points and a second angle is within a preset angle range, the difference between the pixel value gradient direction of P pixel points in the M pixel points and a third angle is within the preset angle range, the sum of the pixel value gradient amplitude values of the O pixel points and the sum of the pixel value gradient amplitude values of the P pixel points is larger than a fourth preset value, the difference between the second angle and the third angle is within the preset range, and any one of the plurality of target sub-images is adjacent to at least one target sub-image except the target sub-image;
and determining the circumscribed rectangular area of the target sub-images as a second area.
The present application further provides a terminal device, referring to fig. 15, fig. 15 is a schematic structural diagram of a terminal device provided in an embodiment of the present application, and as shown in fig. 15, the terminal device 1500 includes:
An acquisition module 1501 for acquiring a target image, the target image including a plurality of bar codes, the plurality of bar codes being non-overlapping with each other;
a decoding module 1502, configured to decode M target barcodes of the plurality of barcodes to obtain M decoded contents, where M is a positive integer;
a generating module 1503, configured to generate corresponding hint information based on each decoded content;
and the output module 1504 is used for outputting M prompting messages.
Optionally, the terminal device further includes:
the receiving module is used for receiving a selection instruction of a user, wherein the selection instruction indicates the selection of target prompt information, and the target prompt information is one of the M prompt information;
the output module is used for responding to the selection instruction and triggering the function of the decoding content corresponding to the target prompt information, and the function at least comprises one of the following steps:
jumping to a corresponding webpage;
opening a target function in a corresponding application program;
displaying the corresponding character string;
displaying the corresponding video;
or, playing the corresponding audio.
Optionally, the acquiring module is further configured to:
identifying that the plurality of bar codes comprises M two-dimensional codes and N one-dimensional codes;
The determining module is further configured to:
and determining the M two-dimensional codes as the M target bar codes.
Optionally, the acquiring module is further configured to:
acquiring the definition of each bar code in the plurality of bar codes;
the determining module is further configured to:
and determining M barcodes with front definition in the plurality of barcodes as the M target barcodes.
Optionally, the acquiring module is further configured to:
acquiring the size of each bar code in the plurality of bar codes;
the determining module is further configured to:
and determining M barcodes with the front sizes as M target barcodes.
Optionally, the target image is acquired through a barcode recognition function of the target application, and the acquiring module is further configured to:
acquiring the use frequency of each bar code in the plurality of bar codes in the target application program;
the determining module is further configured to:
and determining M barcodes with the front using frequency in the plurality of barcodes as the M target barcodes.
Optionally, the target image is acquired through a barcode recognition function of the target application, and the generating module is specifically configured to:
if the function of the first decoding content cannot be triggered by the target application program, generating first prompt information, where the first prompt information includes a prompt that the function corresponding to the first decoding content cannot be triggered, or recommendation information of an application program that can trigger the function corresponding to the first decoding content;
if the function of the first decoding content can be triggered by the target application program, generating second prompt information, where the second prompt information includes a prompt of the function corresponding to the first decoding content;
wherein the first decoding content is one of the plurality of decoding contents.
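The two-branch prompt generation above can be sketched as follows; `can_trigger` and `recommend_apps` are illustrative callables, not APIs defined by this application:

```python
from typing import Callable, Sequence

def make_prompt_info(
    content: str,
    can_trigger: Callable[[str], bool],             # can the target application handle it?
    recommend_apps: Callable[[str], Sequence[str]]  # apps that could handle it
) -> str:
    """Generate the first or second prompt information for one decoding content."""
    if not can_trigger(content):
        apps = ", ".join(recommend_apps(content))
        return f"This function cannot be triggered here; try: {apps}"
    return f"Trigger function for: {content}"
```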
The present application further provides a terminal device, referring to fig. 16, fig. 16 is a schematic structural diagram of a terminal device provided in an embodiment of the present application, and as shown in fig. 16, the terminal device 1600 includes:
an acquisition module 1601, configured to acquire a target image, where the target image includes a plurality of bar codes, and the plurality of bar codes do not overlap with each other;
the decoding module 1602 is configured to decode the plurality of barcodes to obtain decoded contents corresponding to the plurality of barcodes respectively;
a determining module 1603, configured to determine a target barcode from the plurality of barcodes according to the decoded contents respectively corresponding to the plurality of barcodes;
an output module 1604, configured to trigger a function of decoding content corresponding to the target barcode; wherein the functions include at least one of:
jumping to a corresponding webpage; opening a target function in a corresponding application program; displaying the corresponding character string; displaying the corresponding video; or, playing the corresponding audio.
Optionally, the target image is acquired through a barcode recognition function of the target application, and the determining module is specifically configured to:
acquiring the use frequency of the function corresponding to each decoding content in the target application program;
and determining, as the target bar code, the bar code whose corresponding function has the highest use frequency among the plurality of bar codes.
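A minimal sketch of this frequency-based choice, assuming the target application exposes a usage counter keyed by decoding content (an assumption; the application does not specify how the frequency is stored):

```python
from typing import Dict, Sequence

def pick_target_by_frequency(contents: Sequence[str],
                             usage_count: Dict[str, int]) -> str:
    """Return the decoding content whose corresponding function has the
    highest historical use frequency in the target application."""
    return max(contents, key=lambda c: usage_count.get(c, 0))
```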
The present application further provides a terminal device, referring to fig. 17, fig. 17 is a schematic structural diagram of a terminal device provided in an embodiment of the present application, and as shown in fig. 17, the terminal device 1700 includes: an output module 1701, a first decoding module 1702 and a second decoding module 1703;
the output module 1701 is configured to display a first control and a second control, where the first control is configured to trigger the first decoding module 1702, the second control is configured to trigger the second decoding module 1703, the first decoding module 1702 is configured to decode at least two bar codes in an image comprising a plurality of bar codes, and the second decoding module 1703 is configured to decode one bar code in an image comprising a plurality of bar codes;
if a first selection operation of the first control by the user is received, triggering a first decoding module 1702 to decode the acquired target image;
And if receiving a second selection operation of the second control by the user, triggering a second decoding module 1703 to decode the acquired target image.
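One way to read this two-control dispatch is the sketch below, where `decode_multi` and `decode_single` stand in for the first and second decoding modules, and the string control identifiers are illustrative:

```python
from typing import Callable, List

def on_control_selected(
    image,                                          # the acquired target image
    control: str,                                   # "first" or "second"
    decode_multi: Callable[[object], List[str]],    # first decoding module 1702
    decode_single: Callable[[object], str],         # second decoding module 1703
):
    """Dispatch the target image to the decoder bound to the selected control."""
    if control == "first":
        return decode_multi(image)                  # decode at least two bar codes
    if control == "second":
        return decode_single(image)                 # decode exactly one bar code
    raise ValueError(f"unknown control: {control!r}")
```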
Optionally, in an implementation of the eighth aspect, the first decoding module 1702 is further configured to:
decoding one bar code in an image comprising the one bar code.
Optionally, in an implementation of the eighth aspect, the second decoding module 1703 is further configured to:
decoding one bar code in an image comprising the one bar code.
Optionally, in an implementation manner of the eighth aspect, the first decoding module 1702 is specifically configured to:
acquiring gradient information of an image comprising a plurality of bar codes;
determining M first areas and N second areas from the image comprising the plurality of bar codes according to the gradient information, wherein each first area corresponds to one-dimensional code in the plurality of bar codes, each second area corresponds to one two-dimensional code in the plurality of bar codes, and M and N are positive integers;
the M first regions and N second regions are coded.
Optionally, in an implementation manner of the eighth aspect, the first decoding module 1702 is specifically configured to:
decoding the M first areas based on a decoding rule of a one-dimensional code;
And decoding the N second areas based on a decoding rule of the two-dimensional code.
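As an illustration of such type-specific decoding, the sketch below uses the third-party pyzbar package for the one-dimensional rule and OpenCV's QRCodeDetector for the two-dimensional rule; the application prescribes no particular decoder, so both libraries are stand-ins:

```python
import cv2
from pyzbar.pyzbar import decode as zbar_decode  # third-party bar-code decoder

def decode_regions(first_regions, second_regions):
    """Decode M one-dimensional-code crops and N two-dimensional-code crops
    with decoders specific to each code family."""
    contents = []
    for crop in first_regions:                   # M first areas: 1D decoding rule
        contents += [r.data.decode("utf-8") for r in zbar_decode(crop)]
    qr = cv2.QRCodeDetector()
    for crop in second_regions:                  # N second areas: 2D decoding rule
        text, _points, _raw = qr.detectAndDecode(crop)
        if text:
            contents.append(text)
    return contents
```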
Optionally, in an implementation of the eighth aspect, the first decoding module 1702 is further configured to:
acquiring a plurality of decoding contents obtained after the first decoding module 1702 executes decoding, wherein the plurality of decoding contents comprise M decoding contents corresponding to M first areas and N decoding contents corresponding to N second areas, each decoding content comprises a character string, and each decoding content is used for triggering a corresponding function; wherein the functions include at least one of: jumping to a corresponding webpage; opening a target function in a corresponding application program; displaying the corresponding character string; displaying the corresponding video; or, playing the corresponding audio.
Referring to fig. 18, fig. 18 is a schematic structural diagram of a server provided in the embodiment of the present application, where a server 1800 includes:
the acquiring module 1801 is configured to receive a target image sent by a terminal device, where the target image includes a plurality of bar codes, the plurality of bar codes do not overlap with each other, and the plurality of bar codes include a one-dimensional code and a two-dimensional code; acquiring gradient information of the target image;
A determining module 1802, configured to determine M first areas and N second areas from the target image according to the gradient information, where each first area corresponds to one-dimensional code of the plurality of barcodes, each second area corresponds to one two-dimensional code of the plurality of barcodes, and M and N are positive integers;
and a decoding module 1803, configured to decode the M first regions and the N second regions.
Optionally, the decoding module is specifically configured to:
decoding the M first areas based on a decoding rule of a one-dimensional code;
and decoding the N second areas based on a decoding rule of the two-dimensional code.
Optionally, the acquiring module is further configured to:
obtaining M+N decoding contents obtained after decoding, wherein the M+N decoding contents comprise M decoding contents corresponding to M first areas and N decoding contents corresponding to N second areas, each decoding content comprises a character string, and each decoding content is used for triggering a corresponding function;
wherein the functions include at least one of:
jumping to a corresponding webpage;
opening a target function in a corresponding application program;
displaying the corresponding character string;
displaying the corresponding video;
or, playing the corresponding audio.
Optionally, the server further includes:
and the rotating module is used for rotating the second target area if the minimum included angle between the boundary line of the second target area and the transverse axis direction of the target image is larger than a first preset angle, so that the minimum included angle between the boundary line of the second target area and the transverse axis direction of the target image is smaller than the first preset angle, and the second target area is one of the M first areas and the N second areas.
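A minimal sketch of this rotation step, assuming the second target area is available as an image crop with a known boundary angle; the 5-degree first preset angle is an assumed value, as the application leaves the threshold open:

```python
import cv2
import numpy as np

def align_region(crop: np.ndarray, boundary_angle_deg: float,
                 first_preset_angle: float = 5.0) -> np.ndarray:
    """Rotate the region so the minimum angle between its boundary line and
    the image's horizontal axis drops below the first preset angle."""
    if abs(boundary_angle_deg) <= first_preset_angle:
        return crop                               # already close enough to horizontal
    h, w = crop.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), boundary_angle_deg, 1.0)
    return cv2.warpAffine(crop, rot, (w, h))
```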
Optionally, the gradient information includes a pixel value gradient magnitude value and a pixel value gradient direction, the gradient information represents pixel value variation information of each pixel point of the target image, the pixel value gradient direction represents a pixel value maximum variation direction of each pixel point, and the pixel value gradient magnitude value represents a pixel value variation magnitude of the pixel value maximum variation direction of each pixel point.
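One common realization of this gradient information uses Sobel filters, as sketched below; the filter choice is an assumption, since any derivative operator yielding a per-pixel magnitude and direction fits the description:

```python
import cv2
import numpy as np

def pixel_gradients(gray: np.ndarray):
    """Per-pixel gradient magnitude (size of the strongest pixel-value change)
    and direction (orientation of that change) of a grayscale image."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)               # radians in (-pi, pi]
    return magnitude, direction
```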
Optionally, the determining module is specifically configured to:
performing image division on the target image to obtain a plurality of sub-images;
determining a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image comprises M pixel points, the difference between the pixel value gradient direction of N pixel points in the M pixel points and a first angle is in a preset angle range, the ratio of the sum of the pixel value gradient amplitude values of the N pixel points to the sum of the pixel value gradient amplitude values of the M pixel points is larger than a first preset value, and any one target sub-image in the plurality of target sub-images is adjacent to at least one target sub-image except the target sub-image;
And determining the circumscribed rectangular area of the target sub-images as a first area.
Optionally, the determining module is specifically configured to:
performing image division on the target image to obtain a plurality of sub-images;
identifying a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image comprises M pixel points, the difference between the pixel value gradient direction of O pixel points in the M pixel points and a second angle is within a preset angle range, the difference between the pixel value gradient direction of P pixel points in the M pixel points and a third angle is within the preset angle range, the sum of the pixel value gradient amplitude values of the O pixel points and the sum of the pixel value gradient amplitude values of the P pixel points is larger than a fourth preset value, the difference between the second angle and the third angle is within the preset range, and any one of the plurality of target sub-images is adjacent to at least one target sub-image except the target sub-image;
and determining the circumscribed rectangular area of the target sub-images as a second area.
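Putting the two determinations together, the sketch below classifies one sub-image from its gradient statistics: one dominant direction with a high amplitude ratio suggests a one-dimensional-code cell (first area); two roughly perpendicular directions whose amplitude sums exceed a threshold suggest a two-dimensional-code cell (second area). All thresholds, the 90-degree reading of the second/third-angle relation, and the histogram-peak choice of angles are assumptions, not values fixed by the application:

```python
import numpy as np

def classify_subimage(mag: np.ndarray, ang: np.ndarray,
                      angle_tol: float = np.deg2rad(15.0),  # preset angle range
                      ratio_thresh: float = 0.6,            # first preset value
                      amp_thresh: float = 1e4               # fourth preset value
                      ) -> str:
    """Classify one sub-image from its gradient magnitudes and directions."""
    def angdiff(a, b):
        # smallest absolute angular difference, wrapped to [0, pi]
        return np.abs((a - b + np.pi) % (2.0 * np.pi) - np.pi)

    # Dominant direction (stands in for the first/second angle): the
    # magnitude-weighted peak of an orientation histogram.
    hist, edges = np.histogram(ang, bins=36, range=(-np.pi, np.pi), weights=mag)
    dominant = edges[int(np.argmax(hist))]
    near_dom = angdiff(ang, dominant) < angle_tol

    # One dominant direction with a high amplitude ratio -> first-area cell.
    if mag[near_dom].sum() / max(mag.sum(), 1e-9) > ratio_thresh:
        return "first_area_cell"

    # Two roughly perpendicular directions (second and third angles) whose
    # amplitude sums together exceed the threshold -> second-area cell.
    perpendicular = dominant + np.pi / 2.0
    near_perp = angdiff(ang, perpendicular) < angle_tol
    if mag[near_dom].sum() + mag[near_perp].sum() > amp_thresh:
        return "second_area_cell"
    return "background"
```

Adjacent cells of the same class would then be grouped, and the circumscribed rectangle of each group taken as a first or second area, as described above.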
Next, referring to fig. 19, fig. 19 is a schematic structural diagram of a terminal device provided in an embodiment of the present application, where the terminal device 1900 may specifically be a virtual reality (VR) device, a mobile phone, a tablet computer, a notebook computer, a smart wearable device, or the like, which is not limited herein. Specifically, the terminal device 1900 includes: a receiver 1901, a transmitter 1902, a processor 1903, and a memory 1904 (where the number of processors 1903 in the terminal device 1900 may be one or more, and one processor is taken as an example in fig. 19), where the processor 1903 may include an application processor 19031 and a communication processor 19032. In some embodiments of the present application, the receiver 1901, the transmitter 1902, the processor 1903, and the memory 1904 may be connected by a bus or in other manners.
Memory 1904 may include read only memory and random access memory and provides instructions and data to processor 1903. A portion of the memory 1904 may also include non-volatile random access memory (non-volatile random access memory, NVRAM). The memory 1904 stores a processor and operating instructions, executable modules or data structures, or a subset thereof, or an extended set thereof, wherein the operating instructions may include various operating instructions for implementing various operations.
The processor 1903 controls the operation of the terminal device. In a specific application, the individual components of the terminal device are coupled together by a bus system, which may comprise, in addition to a data bus, a power bus, a control bus, a status signal bus, etc. For clarity of illustration, however, the various buses are referred to in the figures as bus systems.
The methods disclosed in the embodiments of the present application may be applied to the processor 1903 or implemented by the processor 1903. The processor 1903 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the methods described above may be completed by an integrated logic circuit of hardware in the processor 1903 or by instructions in the form of software. The processor 1903 may be a general-purpose processor, a digital signal processor (digital signal processing, DSP), a microprocessor, or a microcontroller, and may further include an application-specific integrated circuit (application specific integrated circuit, ASIC), a field-programmable gate array (field-programmable gate array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The processor 1903 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1904, and the processor 1903 reads the information in the memory 1904 and completes the steps of the foregoing methods in combination with its hardware.
The receiver 1901 may be used to receive input numeric or character information and to generate signal input related to the relevant settings and function control of the terminal device. The transmitter 1902 may be configured to output numeric or character information through a first interface; the transmitter 1902 may be further configured to send an instruction to a disk group through the first interface to modify data in the disk group; the transmitter 1902 may also include a display device such as a display screen.
In one case, the processor 1903 is configured to perform the steps related to the processing in the multi-barcode recognition method in the foregoing embodiment.
Referring to fig. 20, fig. 20 is a schematic structural diagram of the server provided in the embodiment of the present application. The server may vary considerably depending on configuration or performance, and may include one or more central processing units (central processing units, CPU) 2022 (for example, one or more processors), a memory 2032, and one or more storage media 2030 (for example, one or more mass storage devices) storing application programs 2042 or data 2044. The memory 2032 and the storage medium 2030 may be transitory or persistent. The program stored on the storage medium 2030 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the server. Still further, the central processor 2022 may be configured to communicate with the storage medium 2030, and execute, on the server 2000, the series of instruction operations in the storage medium 2030.
The server 2000 may also include one or more power supplies 2026, one or more wired or wireless network interfaces 2050, one or more input/output interfaces 2058, and/or one or more operating systems 2041 such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
In the embodiment of the present application, the central processor 2022 is configured to perform the multi-barcode recognition method described in the above embodiment.
Embodiments of the present application also provide a computer program product which, when run on a computer, causes the computer to perform the multi-barcode recognition method described in the foregoing embodiments.
Also provided in the embodiments of the present application is a computer-readable storage medium having stored therein a program for performing signal processing, which, when run on a computer, causes the computer to perform the steps of the multi-barcode recognition method described in the foregoing embodiments.
It should be further noted that the above-described apparatus embodiments are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place, or may be distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the apparatus embodiments provided in the present application, the connection relationship between modules indicates that they have a communication connection between them, which may be specifically implemented as one or more communication buses or signal lines.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented by means of software plus necessary general-purpose hardware, or of course by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components, and the like. Generally, functions performed by a computer program can be easily implemented by corresponding hardware, and the specific hardware structures used to implement the same function can be varied, such as analog circuits, digital circuits, or dedicated circuits. However, for the present application, a software program implementation is a preferred embodiment in many cases. Based on such understanding, the technical solution of the present application, essentially or the part contributing to the prior art, may be embodied in the form of a software product stored in a readable storage medium, such as a floppy disk, a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk of a computer, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods described in the embodiments of the present application.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless manner (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (Solid State Disk, SSD)), or the like.

Claims (64)

1. A method of multi-barcode recognition, the method comprising:
acquiring a target image, wherein the target image comprises a plurality of bar codes, the bar codes are not overlapped with each other, and the bar codes comprise one-dimensional codes and two-dimensional codes;
acquiring gradient information of the target image;
determining M first areas and N second areas from the target image according to the gradient information, wherein each first area corresponds to one-dimensional code in the plurality of bar codes, each second area corresponds to one two-dimensional code in the plurality of bar codes, and M and N are positive integers; the gradient information comprises pixel value gradient amplitude values and pixel value gradient directions, one second area of the N second areas comprises a plurality of target sub-images, each target sub-image comprises M pixel points, the difference between the pixel value gradient directions of O pixel points of the M pixel points and the second angle is in a preset angle range, the difference between the pixel value gradient directions of P pixel points of the M pixel points and the third angle is in a preset angle range, and the sum of the pixel value gradient amplitude values of the O pixel points and the sum of the pixel value gradient amplitude values of the P pixel points is larger than a fourth preset value;
and decoding the M first areas and the N second areas.
2. The method of claim 1, wherein the decoding the M first areas and the N second areas comprises:
decoding the M first areas based on a decoding rule of a one-dimensional code;
and decoding the N second areas based on a decoding rule of the two-dimensional code.
3. The method according to claim 1, wherein the method further comprises:
obtaining M+N decoding contents obtained after decoding, wherein the M+N decoding contents comprise M decoding contents corresponding to M first areas and N decoding contents corresponding to N second areas, each decoding content comprises a character string, and each decoding content is used for triggering a corresponding function;
wherein the functions include at least one of:
jumping to a corresponding webpage;
opening a target function in a corresponding application program;
displaying the corresponding character string;
displaying the corresponding video;
or, playing the corresponding audio.
4. A method according to claim 3, characterized in that the method further comprises:
generating corresponding prompt information based on each of L decoding contents, wherein the L decoding contents belong to the M+N decoding contents, and the prompt information includes:
a prompt of the function corresponding to the current decoding content, a prompt that the function corresponding to the current decoding content cannot be triggered, or recommendation information of an application program that can trigger the function corresponding to the current decoding content;
and displaying the L prompt messages, wherein L is a positive integer less than or equal to M+N.
5. The method according to claim 4, wherein the method further comprises:
receiving a selection instruction of a user, wherein the selection instruction indicates the selection of target prompt information, and the target prompt information is one of the L prompt information;
and responding to the selection instruction, and triggering the function of decoding content corresponding to the target prompt information.
6. The method of claim 4 or 5, wherein the L decoding contents do not include decoding content whose corresponding bar code is a one-dimensional code.
7. The method according to claim 4 or 5, characterized in that the method further comprises:
the definition of each bar code in the M+N bar codes is obtained by carrying out the definition identification of the bar codes in the M first areas and the N second areas;
and determining the L decoding contents from M+N decoding contents according to the definition of each bar code in the M+N bar codes, wherein the bar codes corresponding to the L decoding contents are the first L bar codes after the M+N bar codes are sequenced from large to small according to the definition.
8. The method according to claim 4 or 5, characterized in that the method further comprises:
the size of each bar code in the M+N bar codes is obtained by recognizing the sizes of the bar codes in the M first areas and the N second areas;
and determining the L decoding contents from M+N decoding contents according to the size of each bar code in the M+N bar codes, wherein the bar codes corresponding to the L decoding contents are the first L bar codes after the M+N bar codes are ordered from large to small according to the size.
9. The method of claim 4 or 5, wherein the target image is acquired by a barcode recognition function of a target application, the method further comprising:
acquiring the use frequency of the function corresponding to each decoding content in M+N decoding contents in the target application program;
and determining the L decoding contents from the M+N decoding contents according to the use frequency of the function corresponding to each decoding content, wherein the L decoding contents are the first L decoding contents after the M+N decoding contents are sorted in descending order of the use frequency of their corresponding functions.
10. The method according to claim 4 or 5, wherein a distance between a display position of each prompt message and a corresponding first target area is within a preset range, and the first target area is one of the M first areas and the N second areas.
11. The method according to claim 4 or 5, characterized in that the method further comprises:
and displaying an association identifier, wherein the association identifier is used for indicating the association relation between each prompt message and a corresponding first target area, and the first target area is one of the M first areas and the N second areas.
12. The method according to claim 4, wherein the method further comprises:
if a selection instruction of a user for the L prompt messages is not received within a preset time, determining first decoding content from the M+N decoding contents, wherein the use frequency of a function corresponding to the first decoding content is the highest in the M+N decoding contents, the definition of a bar code corresponding to the first decoding content is the highest in the M+N decoding contents, or the size of a bar code corresponding to the first decoding content is the largest in the M+N decoding contents; the target image is acquired through a target application program, and the use frequency is the use frequency of the function corresponding to the decoded content by the target application program;
triggering the function corresponding to the first decoding content.
13. The method of any of claims 1-5, wherein prior to decoding each of the M first regions and N second regions, the method further comprises:
and if the minimum included angle between the boundary line of the second target area and the transverse axis direction of the target image is larger than a first preset angle, rotating the second target area so that the minimum included angle between the boundary line of the second target area and the transverse axis direction of the target image is smaller than the first preset angle, wherein the second target area is one of the M first areas and the N second areas.
14. The method according to any one of claims 1 to 5, wherein the gradient information indicates pixel value change information of each pixel point of the target image, the pixel value gradient direction indicates a pixel value maximum change direction of each pixel point, and the pixel value gradient magnitude value indicates a pixel value change magnitude of the pixel value maximum change direction of each pixel point.
15. The method of claim 14, wherein determining M first regions and N second regions from the target image based on gradient information comprises:
Performing image division on the target image to obtain a plurality of sub-images;
determining a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image comprises M pixel points, the difference between the pixel value gradient direction of N pixel points in the M pixel points and a first angle is in a preset angle range, the ratio of the sum of the pixel value gradient amplitude values of the N pixel points to the sum of the pixel value gradient amplitude values of the M pixel points is larger than a first preset value, and any one target sub-image in the plurality of target sub-images is adjacent to at least one target sub-image except the target sub-image;
and determining the circumscribed rectangular area of the target sub-images as a first area.
16. The method of claim 14, wherein determining M first regions and N second regions from the target image based on gradient information comprises:
performing image division on the target image to obtain a plurality of sub-images;
identifying a plurality of target sub-images from the plurality of sub-images, wherein the difference between the second angle and the third angle is within a preset range, and any one of the plurality of target sub-images is adjacent to at least one target sub-image except the target sub-image;
And determining the circumscribed rectangular area of the target sub-images as a second area.
17. A method of multi-barcode recognition, the method comprising:
acquiring a target image, wherein the target image comprises a plurality of bar codes, and the bar codes are not overlapped with each other;
decoding M target bar codes in the plurality of bar codes to obtain M decoding contents, wherein M is a positive integer; the M decoding contents are obtained by decoding M areas determined from the target image according to gradient information of the target image, the gradient information comprises pixel value gradient amplitude values and pixel value gradient directions, one area of the M areas comprises a plurality of target sub-images, each target sub-image comprises N pixel points, the difference between the pixel value gradient directions of the O pixel points in the N pixel points and the second angle is within a preset angle range, the difference between the pixel value gradient directions of the P pixel points in the N pixel points and the third angle is within a preset angle range, and the sum of the pixel value gradient amplitude values of the O pixel points and the sum of the pixel value gradient amplitude values of the P pixel points is larger than a fourth preset value;
Generating corresponding prompt information based on each decoded content, wherein the prompt information comprises:
a prompt of the function corresponding to the current decoding content, a prompt that the function corresponding to the current decoding content cannot be triggered, or recommendation information of an application program that can trigger the function corresponding to the current decoding content;
and outputting the generated M pieces of prompt information.
18. The method of claim 17, wherein the method further comprises:
receiving a selection instruction of a user, wherein the selection instruction indicates the selection of target prompt information, and the target prompt information is one of M prompt information;
responding to the selection instruction, triggering the function of the decoding content corresponding to the target prompt information, wherein the function at least comprises one of the following steps:
jumping to a corresponding webpage;
opening a target function in a corresponding application program;
displaying the corresponding character string;
displaying the corresponding video;
or, playing the corresponding audio.
19. The method according to claim 17 or 18, characterized in that the method further comprises:
identifying that the plurality of bar codes comprises M two-dimensional codes and N one-dimensional codes;
and determining the M two-dimensional codes as the M target bar codes.
20. The method according to claim 17 or 18, characterized in that the method further comprises:
acquiring the definition of each bar code in the plurality of bar codes;
and determining, as the M target bar codes, the first M bar codes after the plurality of bar codes are sorted in descending order of definition.
21. The method according to claim 17 or 18, characterized in that the method further comprises:
acquiring the size of each bar code in the plurality of bar codes;
and determining, as the M target bar codes, the first M bar codes after the plurality of bar codes are sorted in descending order of size.
22. The method of claim 17 or 18, wherein the target image is acquired by a barcode recognition function of a target application, the method further comprising:
acquiring the use frequency of each bar code in the plurality of bar codes in the target application program;
and determining, as the M target bar codes, the first M bar codes after the plurality of bar codes are sorted in descending order of use frequency.
23. The method according to claim 17 or 18, wherein the target image is acquired by a barcode recognition function of the target application, and the generating the corresponding hint information based on each decoded content includes:
If the function of the first decoding content cannot be triggered by the target application program, generating first prompt information, wherein the first prompt information comprises a prompt which cannot trigger the function corresponding to the first decoding content or recommendation information of the application program which can trigger the function corresponding to the first decoding content;
if the function of the first decoding content can be triggered by the target application program, generating second prompt information, wherein the second prompt information comprises a prompt of the function corresponding to the first decoding content;
wherein the first decoding content is one of the plurality of decoding contents.
24. A method of bar code identification, the method comprising:
acquiring a target image, wherein the target image comprises a plurality of bar codes, and the bar codes are not overlapped with each other;
decoding the plurality of bar codes to obtain decoding contents corresponding to the plurality of bar codes respectively; each decoding content is obtained by decoding an area determined from the target image according to gradient information of the target image, the gradient information comprises a pixel value gradient amplitude value and a pixel value gradient direction, the area comprises a plurality of target sub-images, each target sub-image comprises N pixel points, the difference between the pixel value gradient direction of O pixel points in the N pixel points and a second angle is within a preset angle range, the difference between the pixel value gradient direction of P pixel points in the N pixel points and a third angle is within a preset angle range, and the sum of the pixel value gradient amplitude values of the O pixel points and the sum of the pixel value gradient amplitude values of the P pixel points is larger than a fourth preset value;
Determining a target bar code from the plurality of bar codes according to the decoding content corresponding to the plurality of bar codes respectively;
triggering the function of decoding content corresponding to the target bar code; wherein the functions include at least one of:
jumping to a corresponding webpage; opening a target function in a corresponding application program; displaying the corresponding character string; displaying the corresponding video; or, playing the corresponding audio.
25. The method of claim 24, wherein the target image is obtained by a barcode recognition function of the target application, and wherein determining the target barcode from the plurality of barcodes according to the decoded content corresponding to the plurality of barcodes, respectively, comprises:
acquiring the use frequency of the function corresponding to each decoding content in the target application program;
and determining, as the target bar code, the bar code whose corresponding function has the highest use frequency among the plurality of bar codes.
26. A method of bar code identification, the method comprising:
displaying a first control and a second control, wherein the first control is used for triggering a first bar code identification method, the second control is used for triggering a second bar code identification method, the first bar code identification method comprises: decoding at least two bar codes in an image comprising a plurality of bar codes, and the second bar code identification method comprises: decoding one bar code in an image comprising a plurality of bar codes; the decoding includes: decoding a plurality of areas determined from the image according to gradient information of the image, wherein the gradient information comprises pixel value gradient amplitude values and pixel value gradient directions, the areas comprise a plurality of target sub-images, each target sub-image comprises N pixel points, the difference between the pixel value gradient directions of O pixel points in the N pixel points and a second angle is in a preset angle range, the difference between the pixel value gradient directions of P pixel points in the N pixel points and a third angle is in a preset angle range, and the sum of the pixel value gradient amplitude values of the O pixel points and the sum of the pixel value gradient amplitude values of the P pixel points is larger than a fourth preset value;
If a first selection operation of the user on the first control is received, executing the first bar code identification method on the acquired target image;
and if a second selection operation of the second control by the user is received, executing the second bar code identification method on the acquired target image.
27. The method of claim 26, wherein the first barcode recognition method further comprises:
decoding one bar code in an image comprising the one bar code.
28. The method of claim 26, wherein the second barcode recognition method further comprises:
decoding one bar code in an image comprising the one bar code.
29. The method according to any one of claims 26 to 28, wherein the first barcode recognition method specifically comprises:
acquiring gradient information of an image comprising a plurality of bar codes;
determining M first areas and N second areas from the image comprising the plurality of bar codes according to the gradient information, wherein each first area corresponds to one-dimensional code in the plurality of bar codes, each second area corresponds to one two-dimensional code in the plurality of bar codes, and M and N are positive integers;
and decoding the M first areas and the N second areas.
30. The method of claim 29, wherein the decoding the M first areas and the N second areas comprises:
decoding the M first areas based on a decoding rule of a one-dimensional code;
and decoding the N second areas based on a decoding rule of the two-dimensional code.
31. The method of claim 29, further comprising:
obtaining M+N decoding contents obtained after decoding, wherein the M+N decoding contents comprise M decoding contents corresponding to M first areas and N decoding contents corresponding to N second areas, each decoding content comprises a character string, and each decoding content is used for triggering a corresponding function; wherein the functions include at least one of: jumping to a corresponding webpage; opening a target function in a corresponding application program; displaying the corresponding character string; displaying the corresponding video; or, playing the corresponding audio.
32. A terminal device, comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a target image, the target image comprises a plurality of bar codes, the bar codes are not overlapped with each other, and the bar codes comprise one-dimensional codes and two-dimensional codes; acquiring gradient information of the target image;
The determining module is used for determining M first areas and N second areas from the target image according to the gradient information, wherein each first area corresponds to one-dimensional code in the plurality of bar codes, each second area corresponds to one two-dimensional code in the plurality of bar codes, and M and N are positive integers; the gradient information comprises pixel value gradient amplitude values and pixel value gradient directions, one second area of the N second areas comprises a plurality of target sub-images, each target sub-image comprises M pixel points, the difference between the pixel value gradient directions of O pixel points of the M pixel points and the second angle is in a preset angle range, the difference between the pixel value gradient directions of P pixel points of the M pixel points and the third angle is in a preset angle range, and the sum of the pixel value gradient amplitude values of the O pixel points and the sum of the pixel value gradient amplitude values of the P pixel points is larger than a fourth preset value;
and the decoding module is used for decoding the M first areas and the N second areas.
33. The terminal device according to claim 32, wherein the decoding module is specifically configured to:
decoding the M first areas based on a decoding rule of a one-dimensional code;
And decoding the N second areas based on a decoding rule of the two-dimensional code.
34. The terminal device of claim 32, wherein the acquisition module is further configured to:
obtaining M+N decoding contents obtained after decoding, wherein the M+N decoding contents comprise M decoding contents corresponding to M first areas and N decoding contents corresponding to N second areas, each decoding content comprises a character string, and each decoding content is used for triggering a corresponding function;
wherein the functions include at least one of:
jumping to a corresponding webpage;
opening a target function in a corresponding application program;
displaying the corresponding character string;
displaying the corresponding video;
or, playing the corresponding audio.
35. The terminal device of claim 34, wherein the terminal device further comprises: an output module for:
generating corresponding prompt information based on each of L decoding contents, wherein the L decoding contents belong to the M+N decoding contents, and the prompt information includes:
a prompt of the function corresponding to the current decoding content, a prompt that the function corresponding to the current decoding content cannot be triggered, or recommendation information of an application program that can trigger the function corresponding to the current decoding content;
And displaying the L prompt messages, wherein L is a positive integer less than or equal to M+N.
36. The terminal device of claim 35, wherein the terminal device further comprises:
the receiving module is used for receiving a selection instruction of a user, wherein the selection instruction indicates the selection of target prompt information, and the target prompt information is one of the L prompt information;
the output module is further used for responding to the selection instruction and triggering the function of decoding content corresponding to the target prompt information.
37. The terminal device according to claim 35 or 36, wherein the L decoding contents do not include decoding content whose corresponding bar code is a one-dimensional code.
38. The terminal device according to claim 35 or 36, wherein the acquisition module is further configured to:
the definition of each bar code in the M+N bar codes is obtained by carrying out the definition identification of the bar codes in the M first areas and the N second areas;
the determining module is further configured to:
determining the L decoding contents from the M+N decoding contents according to the definition of each bar code in the M+N bar codes, wherein the bar code corresponding to each of the L decoding contents is one of the first L bar codes after the M+N bar codes are sorted in descending order of definition.
39. The terminal device according to claim 35 or 36, wherein the acquisition module is further configured to:
the size of each bar code in the M+N bar codes is obtained by recognizing the sizes of the bar codes in the M first areas and the N second areas;
the determining module is further configured to:
and determining the L decoding contents from the M+N decoding contents according to the size of each bar code in the M+N bar codes, wherein the bar codes corresponding to the L decoding contents are the first L bar codes after the M+N bar codes are sorted in descending order of size.
40. The terminal device according to claim 35 or 36, wherein the target image is acquired by a barcode recognition function of a target application, and the acquisition module is further configured to:
acquiring the use frequency of the function corresponding to each decoding content in M+N decoding contents in the target application program;
the determining module is further configured to:
and determining the L decoding contents from the M+N decoding contents according to the use frequency of the function corresponding to each decoding content, wherein the L decoding contents are the first L decoding contents after the M+N decoding contents are sorted in descending order of the use frequency of their corresponding functions.
41. The terminal device according to claim 35 or 36, wherein a distance between a display position of each prompt message and a corresponding first target area is within a preset range, and wherein the first target area is one of the M first areas and the N second areas.
42. The terminal device according to claim 35 or 36, wherein the output module is further configured to:
and displaying an association identifier, wherein the association identifier is used for indicating the association relation between each prompt message and a corresponding first target area, and the first target area is one of the M first areas and the N second areas.
43. The terminal device of claim 35, wherein the output module is further configured to:
if a selection instruction of a user for the L prompt messages is not received within a preset time, determining first decoding content from the M+N decoding contents, wherein the use frequency of a function corresponding to the first decoding content is the highest in the M+N decoding contents, the definition of a bar code corresponding to the first decoding content is the highest in the M+N decoding contents, or the size of a bar code corresponding to the first decoding content is the largest in the M+N decoding contents; the target image is acquired through a target application program, and the use frequency is the use frequency of the function corresponding to the decoded content by the target application program;
Triggering the function corresponding to the first decoding content.
44. A terminal device according to any of claims 32 to 36, characterized in that the terminal device further comprises:
and the rotating module is used for rotating the second target area if the minimum included angle between the boundary line of the second target area and the transverse axis direction of the target image is larger than a first preset angle, so that the minimum included angle between the boundary line of the second target area and the transverse axis direction of the target image is smaller than the first preset angle, and the second target area is one of the M first areas and the N second areas.
45. The terminal device according to any one of claims 32 to 36, wherein the gradient information includes a pixel value gradient magnitude value and a pixel value gradient direction, the gradient information indicating pixel value variation information of each pixel point of the target image, the pixel value gradient direction indicating a pixel value maximum variation direction of each pixel point, the pixel value gradient magnitude value indicating a pixel value variation magnitude of the pixel value maximum variation direction of each pixel point.
46. The terminal device of claim 45, wherein the determining module is specifically configured to:
Performing image division on the target image to obtain a plurality of sub-images;
determining a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image comprises M pixel points, the difference between the pixel value gradient direction of N pixel points in the M pixel points and a first angle is in a preset angle range, the ratio of the sum of the pixel value gradient amplitude values of the N pixel points to the sum of the pixel value gradient amplitude values of the M pixel points is larger than a first preset value, and any one target sub-image in the plurality of target sub-images is adjacent to at least one target sub-image except the target sub-image;
and determining the circumscribed rectangular area of the target sub-images as a first area.
47. The terminal device of claim 45, wherein the determining module is specifically configured to:
performing image division on the target image to obtain a plurality of sub-images;
identifying a plurality of target sub-images from the plurality of sub-images, wherein each target sub-image comprises M pixel points, the difference between the pixel value gradient direction of O pixel points in the M pixel points and a second angle is within a preset angle range, the difference between the pixel value gradient direction of P pixel points in the M pixel points and a third angle is within the preset angle range, the sum of the pixel value gradient amplitude values of the O pixel points and the sum of the pixel value gradient amplitude values of the P pixel points is larger than a fourth preset value, the difference between the second angle and the third angle is within the preset range, and any one of the plurality of target sub-images is adjacent to at least one target sub-image except the target sub-image;
And determining the circumscribed rectangular area of the target sub-images as a second area.
48. A terminal device, characterized in that the terminal device comprises:
the acquisition module is used for acquiring a target image, wherein the target image comprises a plurality of bar codes, and the bar codes are not overlapped with each other;
the decoding module is used for decoding M target bar codes in the plurality of bar codes to obtain M decoding contents, wherein M is a positive integer; the M decoding contents are obtained by decoding M areas determined from the target image according to gradient information of the target image, the gradient information comprises pixel value gradient amplitude values and pixel value gradient directions, one area of the M areas comprises a plurality of target sub-images, each target sub-image comprises N pixel points, the difference between the pixel value gradient directions of O pixel points in the N pixel points and a second angle is within a preset angle range, the difference between the pixel value gradient directions of P pixel points in the N pixel points and a third angle is within a preset angle range, and the sum of the pixel value gradient amplitude values of the O pixel points and the sum of the pixel value gradient amplitude values of the P pixel points is larger than a fourth preset value;
The generation module is used for generating corresponding prompt information based on each decoding content;
and the output module is used for outputting the M pieces of prompt information.
49. The terminal device of claim 48, wherein the terminal device further comprises:
the receiving module is used for receiving a selection instruction of a user, wherein the selection instruction indicates the selection of target prompt information, and the target prompt information is one of the M prompt information;
the output module is used for responding to the selection instruction and triggering the function of the decoding content corresponding to the target prompt information, and the function at least comprises one of the following steps:
jumping to a corresponding webpage;
opening a target function in a corresponding application program;
displaying the corresponding character string;
displaying the corresponding video;
or, playing the corresponding audio.
50. The terminal device of claim 48 or 49, wherein the acquisition module is further configured to:
identifying that the plurality of bar codes comprises M two-dimensional codes and N one-dimensional codes;
the apparatus further comprises: a determining module for:
and determining the M two-dimensional codes as the M target bar codes.
51. The terminal device of claim 48 or 49, wherein the acquisition module is further configured to:
Acquiring the definition of each bar code in the plurality of bar codes;
the apparatus further comprises: a determining module for:
and determining, as the M target bar codes, the M bar codes with the highest definition among the plurality of bar codes.
52. The terminal device of claim 48 or 49, wherein the acquisition module is further configured to:
acquiring the size of each bar code in the plurality of bar codes;
the apparatus further comprises: a determining module for:
and determining, as the M target bar codes, the M largest bar codes among the plurality of bar codes.
53. The terminal device of claim 48 or 49, wherein the target image is acquired by a barcode recognition function of a target application, and wherein the acquisition module is further configured to:
acquiring the use frequency of each bar code in the plurality of bar codes in the target application program;
the apparatus further comprises: a determining module for:
and determining, as the M target bar codes, the M most frequently used bar codes among the plurality of bar codes.
54. The terminal device according to claim 48 or 49, wherein the target image is obtained by a barcode recognition function of a target application, and the generating module is specifically configured to:
if the function of the first decoding content cannot be triggered by the target application program, generating first prompt information, wherein the first prompt information comprises a prompt that the function corresponding to the first decoding content cannot be triggered, or recommendation information of an application program that can trigger the function corresponding to the first decoding content;
if the function of the first decoding content can be triggered by the target application program, generating second prompt information, wherein the second prompt information comprises a prompt of the function corresponding to the first decoding content;
wherein the first decoding content is one of the plurality of decoding contents.
55. A terminal device, characterized in that the terminal device comprises:
the acquisition module is used for acquiring a target image, wherein the target image comprises a plurality of bar codes, and the bar codes are not overlapped with each other;
the decoding module is used for decoding the plurality of bar codes to obtain decoding contents corresponding to the plurality of bar codes respectively; each decoding content is obtained by decoding an area determined from the target image according to gradient information of the target image, the gradient information comprises a pixel value gradient amplitude value and a pixel value gradient direction, the area comprises a plurality of target sub-images, each target sub-image comprises N pixel points, the difference between the pixel value gradient direction of O pixel points in the N pixel points and a second angle is within a preset angle range, the difference between the pixel value gradient direction of P pixel points in the N pixel points and a third angle is within a preset angle range, and the sum of the pixel value gradient amplitude values of the O pixel points and the sum of the pixel value gradient amplitude values of the P pixel points is larger than a fourth preset value;
The determining module is used for determining a target bar code from the plurality of bar codes according to the decoding content corresponding to the plurality of bar codes respectively;
the output module is used for triggering the function of the decoding content corresponding to the target bar code; wherein the functions include at least one of:
jumping to a corresponding webpage; opening a target function in a corresponding application program; displaying the corresponding character string; displaying the corresponding video; or, playing the corresponding audio.
56. The terminal device of claim 55, wherein the target image is acquired by a barcode recognition function of a target application, and wherein the determining module is specifically configured to:
acquire the frequency of use, in the target application, of the function corresponding to each decoded content; and
determine, among the plurality of barcodes, the barcode whose corresponding function is most frequently used as the target barcode.
57. A terminal device, characterized in that the terminal device comprises:
an output module, configured to display a first control and a second control, wherein the first control is used to trigger a first decoding module and the second control is used to trigger a second decoding module; the first decoding module is configured to decode at least two barcodes in an image comprising a plurality of barcodes, and the second decoding module is configured to decode one barcode in an image comprising a plurality of barcodes; the decoding comprises: decoding a plurality of regions determined from the image according to gradient information of the image, the gradient information comprising a pixel-value gradient magnitude and a pixel-value gradient direction, wherein each region comprises a plurality of target sub-images, each target sub-image comprises N pixels, the difference between the pixel-value gradient direction of O of the N pixels and a second angle is within a preset angle range, the difference between the pixel-value gradient direction of P of the N pixels and a third angle is within a preset angle range, and the sum of the pixel-value gradient magnitudes of the O pixels plus the sum of the pixel-value gradient magnitudes of the P pixels is greater than a fourth preset value;
wherein if a first selection operation by a user on the first control is received, the first decoding module is triggered to decode the acquired target image; and
if a second selection operation by the user on the second control is received, the second decoding module is triggered to decode the acquired target image.
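(Illustrative note.) The control wiring in claim 57 amounts to routing the same captured image to one of two decoders; a trivial sketch with assumed names:

def on_control_selected(control_id, image, decode_multi, decode_single):
    # "first" control -> multi-code decoder (at least two barcodes);
    # "second" control -> single-code decoder (one barcode).
    if control_id == "first":
        return decode_multi(image)
    return decode_single(image)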
58. The terminal device of claim 57, wherein the first decoding module is further configured to:
decode one barcode in an image comprising only that barcode.
59. The terminal device of claim 57 or 58, wherein the second decoding module is further configured to:
decode one barcode in an image comprising only that barcode.
60. The terminal device of claim 57 or 58, wherein the first decoding module is specifically configured to:
acquire gradient information of an image comprising a plurality of barcodes;
determine M first regions and N second regions from the image comprising the plurality of barcodes according to the gradient information, wherein each first region corresponds to one one-dimensional code among the plurality of barcodes, each second region corresponds to one two-dimensional code among the plurality of barcodes, and M and N are positive integers; and
decode the M first regions and the N second regions.
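(Illustrative note.) One plausible reading of the first-region/second-region split in claim 60: a one-dimensional code's edges share a single dominant gradient direction, while a two-dimensional code shows two roughly orthogonal ones. The histogram binning and the separation test below are illustrative heuristics, not taken from the patent:

import numpy as np
import cv2

def classify_regions(gray, boxes):
    # Split candidate boxes (x, y, w, h) into 1D-code regions (one
    # dominant gradient direction) and 2D-code regions (two roughly
    # orthogonal dominant directions).
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    mag = np.hypot(gx, gy)

    first_regions, second_regions = [], []
    for (x, y, w, h) in boxes:
        a = ang[y:y + h, x:x + w].ravel()
        m = mag[y:y + h, x:x + w].ravel()
        # Magnitude-weighted direction histogram, 12 bins of 15 degrees.
        hist, _ = np.histogram(a, bins=12, range=(0, 180), weights=m)
        top_two = np.argsort(hist)[-2:]
        sep = abs(int(top_two[0]) - int(top_two[1])) * 15
        if hist[top_two[0]] > 0 and min(sep, 180 - sep) >= 60:
            second_regions.append((x, y, w, h))  # two directions -> 2D
        else:
            first_regions.append((x, y, w, h))   # one direction -> 1D
    return first_regions, second_regions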
61. The terminal device of claim 60, wherein the first decoding module is specifically configured to:
decode the M first regions based on a decoding rule for one-dimensional codes; and
decode the N second regions based on a decoding rule for two-dimensional codes.
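(Illustrative note.) For the per-ruleset decoding of claim 61, restricting a general decoder to a symbol set is one way to stand in for "a decoding rule for one-dimensional/two-dimensional codes". The sketch below uses the third-party pyzbar library, an assumed dependency rather than anything the patent names:

from pyzbar.pyzbar import decode, ZBarSymbol  # third-party; assumed

ONE_D = [ZBarSymbol.EAN13, ZBarSymbol.CODE128, ZBarSymbol.CODE39]
TWO_D = [ZBarSymbol.QRCODE]

def decode_regions(gray, first_regions, second_regions):
    # 1D regions go through 1D symbologies only; 2D regions go through
    # the QR ruleset only. Region tuples are (x, y, w, h); gray is a
    # numpy grayscale image.
    contents = []
    for (x, y, w, h) in first_regions:
        contents += [r.data.decode() for r in
                     decode(gray[y:y + h, x:x + w], symbols=ONE_D)]
    for (x, y, w, h) in second_regions:
        contents += [r.data.decode() for r in
                     decode(gray[y:y + h, x:x + w], symbols=TWO_D)]
    return contents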
62. The terminal device of claim 60, wherein the first decoding module is further configured to:
obtain a plurality of decoded contents after the first decoding module performs decoding, the plurality of decoded contents comprising M decoded contents corresponding to the M first regions and N decoded contents corresponding to the N second regions, wherein each decoded content comprises a character string and is used to trigger a corresponding function, the function comprising at least one of: jumping to a corresponding web page; opening a target function in a corresponding application; displaying a corresponding character string; displaying a corresponding video; or playing a corresponding audio.
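(Illustrative note.) The trigger step in claim 62 is essentially a dispatch on the decoded string. In this sketch only the web-page branch calls a real API (Python's standard webbrowser module); the other branches, and the suffix-based routing itself, are simplifications:

import webbrowser

def trigger(content):
    if content.startswith(("http://", "https://")):
        webbrowser.open(content)  # jump to the corresponding web page
    elif content.endswith((".mp4", ".mov")):
        print("would play video:", content)
    elif content.endswith((".mp3", ".wav")):
        print("would play audio:", content)
    else:
        print("display string:", content)  # show the character string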
63. A computer-readable storage medium comprising a program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 31.
64. A terminal device comprising a processor and a memory, the processor being coupled to the memory, characterized in that:
the memory is configured to store a program; and
the processor is configured to execute the program in the memory, causing the terminal device to perform the method of any one of claims 1 to 31.
CN201911381243.3A 2019-12-27 2019-12-27 Multi-bar code identification method and related equipment Active CN113051950B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911381243.3A CN113051950B (en) 2019-12-27 2019-12-27 Multi-bar code identification method and related equipment

Publications (2)

Publication Number Publication Date
CN113051950A CN113051950A (en) 2021-06-29
CN113051950B (en) 2023-07-18

Family

ID=76506966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911381243.3A Active CN113051950B (en) 2019-12-27 2019-12-27 Multi-bar code identification method and related equipment

Country Status (1)

Country Link
CN (1) CN113051950B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114564978B (en) * 2022-04-27 2022-07-15 北京紫光青藤微系统有限公司 Method and device for decoding two-dimensional code, electronic equipment and storage medium
CN115331269B (en) * 2022-10-13 2023-01-13 天津新视光技术有限公司 Fingerprint identification method based on gradient vector field and application
TWI827423B (en) * 2022-12-28 2023-12-21 大陸商信揚科技(佛山)有限公司 Scanning method and related devices

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012018494A (en) * 2010-07-07 2012-01-26 Keyence Corp Bar code symbol reader, bar code symbol reading method, and computer program
CN103279383A (en) * 2013-05-31 2013-09-04 北京小米科技有限责任公司 Photographing method with two-dimensional bar code scanning function and photographing system with two-dimensional bar code scanning function
CN107665324A (en) * 2016-07-27 2018-02-06 腾讯科技(深圳)有限公司 A kind of image-recognizing method and terminal
CN109241806A (en) * 2018-08-10 2019-01-18 北京龙贝世纪科技股份有限公司 A kind of multi-code recognition methods and identifying system simultaneously

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant