CN107992780B - Code identification method and mobile terminal - Google Patents

Code identification method and mobile terminal

Info

Publication number
CN107992780B
Authority
CN
China
Prior art keywords
image
images
sub
initial
pixel
Prior art date
Legal status
Active
Application number
CN201711051640.5A
Other languages
Chinese (zh)
Other versions
CN107992780A (en)
Inventor
廖朝仲
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201711051640.5A
Publication of CN107992780A
Application granted
Publication of CN107992780B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404 Methods for optical code recognition
    • G06K 7/146 Methods for optical code recognition the method including quality enhancement steps
    • G06K 7/1465 Methods for optical code recognition the method including quality enhancement steps using several successive scans of the optical code
    • G06K 7/1491 Methods for optical code recognition the method including quality enhancement steps the method including a reconstruction step, e.g. stitching two pieces of bar code together to derive the full bar code


Abstract

An embodiment of the invention discloses a code identification method and a mobile terminal. The method comprises the following steps: scanning the same coded image multiple times to obtain a plurality of initial images; synthesizing the plurality of initial images to obtain a target image; and identifying the target image to obtain an identification result. Because the same coded image is scanned multiple times and the resulting initial images are synthesized, a target image containing a more complete code is obtained, which improves the code recognition rate.

Description

Code identification method and mobile terminal
Technical Field
The embodiment of the invention relates to the field of communication, in particular to a code identification method and a mobile terminal.
Background
At present, more and more users obtain various network services by scanning encoded images such as one-dimensional codes and two-dimensional codes. For example, a user may scan a one-dimensional code on goods to query product information, scan a two-dimensional code at a checkout counter to pay a merchant, scan a two-dimensional code to use a shared bicycle, and so forth.
In general, to keep the encoded image clear and durable, it is usually printed on reflective paper, or its surface is covered with a highly reflective material such as transparent plastic or glass. At night or in a dark scene, the mobile phone may not be able to capture a clear encoded image, so the user may turn on the phone's flash and scan the encoded image with the aid of the flash as a light source. However, the light emitted by the flash is reflected by reflective materials such as reflective paper, plastic, or glass, forming a light spot on the surface of the code; the light spot may block part of the encoded image so that the code cannot be recognized. Fig. 1 is a schematic view of a prior-art scene of scanning a two-dimensional code bearing a light spot. As can be seen from fig. 1, the light emitted by the phone's flashlight forms a light spot on the surface of the two-dimensional code, and the light spot shields part of the two-dimensional code so that it cannot be recognized.
Therefore, conventional code recognition methods suffer from the problem that the code cannot be recognized.
Disclosure of Invention
The invention provides a code identification method and a mobile terminal, which aim to solve the problem that the existing code identification method cannot identify codes.
In order to solve the technical problem, the invention is realized as follows: the embodiment of the invention provides a code identification method, which comprises the following steps:
scanning the same coded image for multiple times to obtain multiple initial images;
synthesizing the plurality of initial images to obtain a target image;
and identifying the target image to obtain an identification result.
In a first aspect, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes:
the scanning module is used for scanning the same coded image for multiple times to obtain a plurality of initial images;
the synthesis module is used for synthesizing the plurality of initial images to obtain a target image;
and the identification module is used for identifying the target image to obtain an identification result.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the code recognition method.
In a third aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the code identification method.
In the embodiment of the invention, the target image containing more complete codes is obtained by synthesizing a plurality of initial images obtained by scanning the same coded image for a plurality of times, so that the code recognition rate is improved.
Drawings
FIG. 1 is a schematic diagram of a prior art scene for scanning a two-dimensional code with a light spot;
FIG. 2 is a flowchart of a code recognition method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a code recognition method according to a second embodiment of the present invention;
FIG. 4 is a schematic diagram of various cutting methods provided in the second embodiment of the present invention;
FIG. 5 is a schematic diagram of image segmentation according to a second embodiment of the present invention;
FIG. 6 is a block diagram of a mobile terminal according to a third embodiment of the present invention;
FIG. 7 is a schematic hardware structure diagram of a mobile terminal implementing various embodiments of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Example one
Fig. 2 is a flowchart of a code identification method according to an embodiment of the present invention, where the method is applied to a mobile terminal, and the method specifically includes the following steps:
step 110, scanning the same encoded image for multiple times to obtain multiple initial images.
The mobile terminal includes a mobile phone, a tablet computer, and other terminals. The mobile terminal can scan the same coded image through its camera, thereby acquiring a plurality of initial images that capture the same code.
The encoded image may be an image containing a code such as a one-dimensional code, a two-dimensional code, or a three-dimensional code. If a light spot is present on the encoded image, a corresponding pixel missing region exists in the acquired initial image; in the pixel missing region, part of the code is blocked by the light spot, so the mobile terminal cannot scan the complete code from that initial image.
In a specific implementation, a user can submit a code identification instruction for the coded image. When the instruction is received, a camera of the mobile terminal can be called to scan the coded image and acquire a single-frame image. If a pixel missing region exists in the single-frame image, indicating that a light spot may be present on the coded image, multiple scans may be performed and images of multiple consecutive frames acquired as the plurality of initial images. In practical application, the coded image may instead be scanned first and multiple consecutive frames collected as the plurality of initial images; if all of these initial images are identified as having pixel missing regions, this likewise indicates that a light spot may be present on the coded image.
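The check for a pixel missing region described above can be sketched as follows. Glare from the flash washes pixels out toward saturation, so counting near-saturated pixels is one plausible test; the list-of-rows grayscale representation, the brightness threshold of 250, and the 1% area cutoff are illustrative assumptions, not values taken from the patent.

```python
def has_pixel_missing_region(frame, bright_thresh=250, min_area_ratio=0.01):
    """frame: a grayscale image as a list of rows of 0-255 pixel values.

    Returns True when the fraction of near-saturated pixels (the light
    spot washing out the code) reaches min_area_ratio.
    """
    total = len(frame) * len(frame[0])
    saturated = sum(1 for row in frame for px in row if px >= bright_thresh)
    return saturated / total >= min_area_ratio
```

A production implementation would more likely look for a connected bright blob rather than a global pixel count, but the area-ratio test is enough to drive the frame-selection logic in the following steps.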
And step 120, synthesizing the plurality of initial images to obtain a target image.
In a specific implementation, the plurality of initial images with pixel missing regions may be synthesized to obtain a target image containing a complete code.
More specifically, for the pixel missing region in one of the initial images, one or more pixel-filling images capable of filling that region may be extracted from the other initial images, and the images are then merged to obtain the target image. In a specific implementation, multiple consecutive frames can be acquired of the same coded image to serve as the plurality of initial images. Each initial image is then segmented into a plurality of sub-images, and a position mark is added to each sub-image according to its position in the initial image. Using the position marks of the sub-images that contain a pixel missing region in one initial image, sub-images with matching position marks and no pixel missing region are extracted from the other initial images to serve as pixel-filling images.
For example, suppose initial images P1 and P2 are currently available and each is cut into four equal parts. The initial image P1 is cut into four sub-images at the upper-left, lower-left, upper-right and lower-right positions, labeled P1_1, P1_2, P1_3 and P1_4 respectively, and the initial image P2 is likewise cut into four sub-images labeled P2_1, P2_2, P2_3 and P2_4. If the pixel missing region is recognized to lie in the sub-image P1_2 of the initial image P1, the sub-image P2_2, which has no pixel missing region, is extracted from the initial image P2 as a pixel-filling image.
When the plurality of initial images are synthesized, for the above example, the sub-images P1_1, P1_3 and P1_4 may be extracted and merged with the sub-image P2_2 to obtain a target image composed of P1_1, P2_2, P1_3 and P1_4; alternatively, the sub-image P2_2 may replace P1_2 in the initial image P1, and the initial image after the replacement is used as the target image. Those skilled in the art can adopt different synthesis manners according to the actual situation to obtain a target image with a complete code; the embodiment of the present invention does not limit the specific synthesis manner.
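The four-equal-parts cut in the P1/P2 example might be sketched as below. The function name and the list-of-rows image representation are illustrative assumptions; the dictionary keys mirror the patent's upper-left, lower-left, upper-right, lower-right labels (P1_1 through P1_4).

```python
def split_into_quadrants(frame):
    """Split an image (list of rows) into four position-labelled sub-images,
    corresponding to the patent's P1_1..P1_4 position marks."""
    h, w = len(frame) // 2, len(frame[0]) // 2
    return {
        "upper_left":  [row[:w] for row in frame[:h]],
        "lower_left":  [row[:w] for row in frame[h:]],
        "upper_right": [row[w:] for row in frame[:h]],
        "lower_right": [row[w:] for row in frame[h:]],
    }
```

Keying the sub-images by position rather than by index is what makes the later matching step ("extract the sub-image with the matched position mark from another frame") a simple dictionary lookup.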
It should be added that the light spot on the encoded image is usually caused by light emitted from the mobile terminal. When a user scans the encoded image, the scanning angle of the mobile terminal is usually fine-tuned to capture a relatively complete code. During this fine-tuning, the angle of the light emitted by the mobile terminal changes correspondingly, so the position of the light spot on the encoded image also changes, and as the spot moves, different parts of the code are shielded. Therefore, in a plurality of initial images acquired at successive times, the positions of the pixel missing regions in the respective initial images may differ from one another.
And step 130, identifying the target image to obtain an identification result.
In a specific implementation, code recognition can be performed on the target image; because the target image contains a relatively complete code, the code can be recognized and a recognition result obtained. Those skilled in the art may adopt various code recognition methods according to actual needs to obtain the recognition result of the target image, which is not limited in the embodiment of the present invention.
In the embodiment of the invention, the target image containing more complete codes is obtained by synthesizing a plurality of initial images obtained by scanning the same coded image for a plurality of times, so that the code recognition rate is improved.
Example two
Fig. 3 is a flowchart of a code recognition method according to a second embodiment of the present invention, where the method is applied to a mobile terminal, and the method specifically includes the following steps:
step 210, scanning the same encoded image for multiple times to obtain multiple initial images.
Optionally, the step 210 may include the steps of:
and step 211, when receiving a code identification instruction submitted by a user for the coded image, calling a camera of the mobile terminal to scan the coded image to obtain a single-frame image.
In a specific implementation, when a user needs to identify a code, the user may submit a code identification instruction on the mobile terminal to trigger identification processing for the code. For example, the user turns on the scan-QR-code function in WeChat.
When the mobile terminal receives the code identification instruction, the camera can be called to scan the code image, and a single-frame image is acquired.
Step 212, when a pixel missing region exists in the single-frame image, generating an angle adjustment prompt; the angle adjustment prompt prompts the user to adjust, within a preset time, the angle at which the camera scans the coded image.
In a specific implementation, after a single frame of the coded image is acquired, whether a pixel missing region exists in it can be identified. If so, a light spot is present on the coded image, and an angle adjustment prompt can be generated and displayed on the mobile terminal to prompt the user to adjust, within a certain time, the angle at which the camera scans the coded image; adjusting the scanning angle changes the position of the light spot on the coded image.
It should be noted that, in an actual application scenario, light spots of the encoded image are usually caused by light emitted by a flash of the camera, and adjusting an angle at which the camera scans the encoded image adjusts a position of the light emitted by the flash on the encoded image, so that the position of the light spots on the encoded image is also changed.
Step 213, detecting a light source intensity value of the mobile terminal; the light source intensity value controls the intensity of the light source (for example, the flash) of the mobile terminal.
In step 214, if the light source intensity value is greater than the predetermined intensity threshold, the light source intensity value is decreased.
In addition to prompting the user to adjust the angle at which the camera captures the image, the light source intensity value may also be reduced, for example the intensity of the flash on the mobile terminal. Reducing the light source intensity shrinks the area of the light spot on the coded image and weakens it, so that when the initial images are synthesized, a target image containing a complete code can be obtained from fewer initial images, which speeds up code identification.
Step 215, calling the camera to scan for multiple times within the preset time to obtain multiple frames of images at continuous moments.
In a specific implementation, after the user is prompted to adjust the angle of the scanned coded image within a preset time, multiple frames of images at consecutive moments can be scanned within the preset time.
Step 216, using the multi-frame image as the plurality of initial images.
In a specific implementation, the plurality of initial images may be a plurality of images at consecutive times. In the process of scanning the coded image by the user, the angle of the image scanned by the camera is changed, the position of the light emitted by the flash lamp of the camera on the coded image is correspondingly changed, so that the position of the light spot on the coded image is also changed, and when the position of the light spot is changed, the shielded codes are different. Therefore, in a plurality of initial images acquired at successive times, the positions of the pixel missing regions in the respective initial images may be different from each other.
And step 220, synthesizing the plurality of initial images to obtain a target image.
Optionally, the step 220 may include:
step 221, when it is recognized that all of the plurality of initial images have pixel missing regions, extracting a plurality of pixel-complementing images from the plurality of initial images for the pixel missing regions in at least one initial image.
In a specific implementation, multiple consecutive frames can be acquired of the same coded image to serve as the plurality of initial images. Each initial image is then segmented into a plurality of sub-images, and a position mark is added to each sub-image according to its position in the initial image. For the sub-images containing a pixel missing region in one initial image, sub-images with matching position marks are extracted from the other initial images to serve as pixel-filling images.
Step 222, merging the at least one initial image and the plurality of pixel-filled images to obtain the target image.
In a specific implementation, the initial image with the pixel missing region is merged with the extracted multiple pixel-filling images to obtain the target image.
Optionally, the step 221 may specifically include the following steps:
at least two initial images are extracted as a first image and a second image, respectively.
The first image and the second image are segmented to obtain N first sub-images and M second sub-images; wherein M ≥ N > 1.
Identifying, among the N first sub-images, P target first sub-images containing pixel missing regions; wherein N > P ≥ 1.
And extracting, from the M second sub-images, P second sub-images which do not contain pixel missing regions and are matched with the P target first sub-images, to serve as P pixel-filling images.
In a specific implementation, two or more of the plurality of initial images may be extracted, one selected as the first image and one or more selected as the second images. For example, for multiple frames acquired at consecutive times, the first frame may be taken as the first image and the subsequent frames as the second images; the first image and the second images are then segmented to obtain N first sub-images and M second sub-images. Whether each of the N first sub-images contains a pixel missing region can be identified, yielding P target first sub-images that do. It can then be determined whether each of the M second sub-images contains a pixel missing region; if a second sub-image contains no pixel missing region and matches one of the target first sub-images, it is extracted as a pixel-filling image, until P pixel-filling images are obtained.
Optionally, the mobile terminal is preset with a plurality of area ratios and a plurality of splitting numbers respectively corresponding to the area ratios, and before the step of splitting the first image and the second image to obtain N first sub-images and M second sub-images, the method may further include:
calculating a target area ratio of a pixel missing region of the first image in the first image;
and searching the target segmentation quantity N corresponding to the target area ratio.
In a specific implementation, the area ratio of the pixel missing region in the first image may be calculated, and the segmentation number determined according to that ratio. For example, if the area ratio of the pixel missing region in the image is 12%, the corresponding segmentation number N may be found to be 4, indicating that the image is currently to be segmented into 4 sub-images.
In practical application, if the area ratio of the pixel missing region is small, a relatively small segmentation number N can be set so that the image is segmented into few sub-images; if the area ratio is large, a relatively large segmentation number N can be set so that the image is segmented into many small sub-images. For example, when the area ratio of the pixel missing region in the image is 10%, the corresponding segmentation number may be set to 2; when it is 20%, the segmentation number may be set to 4. Of course, a person skilled in the art may set different area ratios and corresponding segmentation numbers according to the actual situation, which is not limited in the embodiment of the present invention.
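The preset mapping from area ratio to segmentation number might be implemented as a small lookup table. The breakpoints below follow the examples in the text (up to 10% maps to 2, up to 20% maps to 4); the fallback value of 9 for larger spots is an illustrative assumption, since the patent leaves the full table to the implementer.

```python
# Preset (upper area ratio, segmentation number N) pairs, in ascending order.
SEGMENTATION_TABLE = [(0.10, 2), (0.20, 4)]

def target_segmentation_count(area_ratio, table=SEGMENTATION_TABLE, default=9):
    """Look up the target segmentation number N for the area ratio of the
    pixel missing region in the first image."""
    for max_ratio, n in table:
        if area_ratio <= max_ratio:
            return n
    return default  # very large spot: cut the image as finely as configured
```

With this table, the 12% example from the text falls into the second bucket and yields N = 4.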
In practical applications, the second image may have a plurality of second images, and therefore, the step of segmenting the first image and the second image to obtain the N first sub-images and the M second sub-images may be further configured to segment the first image into the N first sub-images, and segment the plurality of second images into the N second sub-images respectively to obtain the M second sub-images. For example, each initial image is sliced into 4, the first image is sliced into 4 first sub-images, and the 4 second images are sliced into 16 second sub-images.
It should be added that the image may be segmented in various ways; fig. 4 is a schematic diagram of several segmentation manners provided by the second embodiment of the present invention. As can be seen from fig. 4, the image may be cut into four equal parts by horizontal and vertical cuts through the image center, or into four parts by diagonal (X-shaped) cuts, or into three equal parts. In practical applications, a person skilled in the art may choose the segmentation manner according to the specific size and position of the light spot.
Optionally, the N first sub-images and the M second sub-images each have an image identifier, and the step of extracting, from the M second sub-images, P second sub-images that do not include a pixel missing region and are matched with the P target first sub-images as P pixel-filling images may specifically include:
selecting candidate second sub-images from the M second sub-images;
if the candidate second sub-image does not contain the pixel missing area and the image identifier of the candidate second sub-image is matched with the image identifier of at least one target first sub-image, extracting the candidate second sub-image as the pixel missing image;
and if the candidate second sub-image contains a pixel missing region, or its image identifier does not match the image identifier of any of the P target first sub-images, returning to the step of selecting a candidate second sub-image from the M second sub-images, until P pixel-filling images are obtained.
In a specific implementation, one of the second sub-images may be selected as a candidate. Whether it contains a pixel missing region is determined first; if not, whether its image identifier matches that of any target first sub-image is determined next, and if so, the candidate is extracted as a pixel-filling image. If the candidate contains a pixel missing region, or its image identifier matches none of the target first sub-images, it cannot fill a pixel missing region of the first image; the next second sub-image is then selected and checked in the same way, until the P pixel-filling images are obtained.
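The candidate-selection loop above can be sketched as follows. Position identifiers play the role of the image identifiers, and the predicate for detecting a glare-affected sub-image is passed in; all function and parameter names are illustrative assumptions.

```python
def find_fill_images(needed_positions, candidate_subimages, has_missing_region):
    """needed_positions: image identifiers of the P target first sub-images.
    candidate_subimages: iterable of (identifier, sub_image) pairs drawn
    from the M second sub-images, in scan order.
    has_missing_region: predicate flagging a glare-affected sub-image.

    Returns {identifier: sub_image} with one pixel-filling image per
    needed position, stopping as soon as all P are found.
    """
    fills = {}
    for pos, sub in candidate_subimages:
        if pos not in needed_positions or pos in fills:
            continue  # identifier does not match, or position already filled
        if has_missing_region(sub):
            continue  # candidate itself contains a pixel missing region
        fills[pos] = sub
        if len(fills) == len(needed_positions):
            break  # all P pixel-filling images obtained
    return fills
```

Iterating in scan order means each missing position is filled from the earliest clean frame available, which keeps the synthesized frames close together in time.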
To facilitate understanding, fig. 5 is a schematic diagram of image segmentation provided in the second embodiment of the present invention. As can be seen from fig. 5, the currently acquired 9 frames of images P1, P2, P3 ... P9 are each cut into four equal parts, and each sub-image is given an image identifier according to its position. According to the upper-left, lower-left, upper-right and lower-right positions, the 4 sub-images of image P1 are labeled P1_1, P1_2, P1_3 and P1_4, the 4 sub-images of image P2 are labeled P2_1, P2_2, P2_3 and P2_4, and so on, up to the 4 sub-images of image P9, labeled P9_1, P9_2, P9_3 and P9_4. Image P1 is selected as the first image, and among its first sub-images P1_1, P1_2, P1_3 and P1_4, the sub-images P1_1, P1_2 and P1_3 are identified as containing pixel missing regions, giving 3 target first sub-images; their image identifiers indicate that 3 second sub-images at the upper-left, lower-left and upper-right positions are needed to fill the pixel missing regions. Images P2 to P9 are selected as second images, and among their second sub-images, P second sub-images that contain no pixel missing region and match P1_1, P1_2 and P1_3 are searched for, obtaining the 3 pixel-filling images P6_1, P9_2 and P5_3.
Optionally, the step 222 may specifically include the following steps:
replacing the P target first sub-images in the first image with the P pixel-filling images;
taking the first image after the replacement as the target image.
In a specific implementation, the P pixel-filling images can fill the pixel missing regions of the P target first sub-images in the first image, so the P pixel-filling images can replace the P target first sub-images in the first image, thereby obtaining the target image. In practical application, the first sub-images that contain no pixel missing region may instead be extracted from the first image and merged with the P pixel-filling images to obtain the target image.
For the image segmentation example shown in fig. 5, the 3 pixel-filling images P6_1, P9_2 and P5_3 may respectively replace the 3 target first sub-images P1_1, P1_2 and P1_3 containing pixel missing regions in image P1, so as to obtain a target image composed of P6_1, P9_2, P5_3 and P1_4. As can be seen from fig. 5, after the sub-images P6_1, P9_2, P5_3 and P1_4 are merged, the resulting target image contains the complete code.
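For the four-equal-parts case, merging the kept and replacement sub-images back into the target image can be sketched as below; the position keys mirror the patent's labels, and the list-of-rows representation is an illustrative assumption.

```python
def merge_quadrants(quads):
    """Reassemble four equal-sized quadrant sub-images (keyed by position,
    as in P6_1/P9_2/P5_3/P1_4 in the fig. 5 example) into one image."""
    top = [l + r for l, r in zip(quads["upper_left"], quads["upper_right"])]
    bottom = [l + r for l, r in zip(quads["lower_left"], quads["lower_right"])]
    return top + bottom
```

This is the inverse of the quadrant split, so a clean frame split and re-merged reproduces itself exactly; in the patented method the three glare-affected quadrants are simply swapped for their pixel-filling counterparts before the merge.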
And step 230, identifying the target image to obtain an identification result.
The target image can be coded and identified to obtain a coded identification result. Because the target image contains more complete codes, the codes can be identified to obtain an identification result.
EXAMPLE III
Fig. 6 is a block diagram of a mobile terminal according to a third embodiment of the present invention, where the mobile terminal 300 may specifically include the following modules:
the scanning module 310 is configured to scan the same encoded image multiple times to obtain multiple initial images.
And a synthesizing module 320, configured to synthesize the multiple initial images to obtain a target image.
The identification module 330 is configured to identify the target image to obtain an identification result.
Optionally, the synthesis module 320 includes:
a pixel-filling image extracting sub-module 321, configured to, when it is identified that all of the plurality of initial images have pixel-missing regions, extract, for a pixel-missing region in at least one of the initial images, a plurality of pixel-filling images from the plurality of initial images;
an image merging submodule 322, configured to merge the at least one initial image and the plurality of pixel-filled images to obtain the target image.
Optionally, the pixel-filling image extracting sub-module 321 includes:
an image extraction unit, configured to extract at least two of the initial images as a first image and a second image, respectively;
the image segmentation unit is used for segmenting the first image and the second image to obtain N first sub-images and M second sub-images; wherein M ≥ N > 1;
a target first sub-image identifying unit, configured to identify, in the N first sub-images, P target first sub-images containing a pixel missing region; wherein N > P ≥ 1;
and the pixel filling-in image extracting unit is used for extracting P second sub-images which do not contain pixel missing areas and are matched with the P target first sub-images from the M second sub-images to be used as P pixel filling-in images.
Optionally, the N first sub-images and the M second sub-images each have an image identifier, and the pixel-filling-up image extracting unit includes:
a candidate second sub-image selecting sub-unit, configured to select candidate second sub-images from the M second sub-images;
a pixel filling-in image extracting subunit, configured to extract the candidate second sub-image as the pixel filling-in image if the candidate second sub-image does not include a pixel missing area and an image identifier of the candidate second sub-image matches an image identifier of at least one target first sub-image;
and a returning subunit, configured to, if the candidate second sub-image includes a pixel missing region or the image identifier of the candidate second sub-image is not matched with the image identifiers of the P target first sub-images, return to the step of selecting the candidate second sub-image from the M second sub-images until P pixel-complemented images are obtained.
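The selection loop formed by the three sub-units above can be sketched as follows. This Python fragment is a hedged illustration only: the `has_missing_region` predicate stands in for the patent's pixel-missing-area detection, and image identifiers are matched by position, both of which are simplifying assumptions.

```python
def collect_fill_images(candidates, target_positions, has_missing_region):
    """Select pixel-filled images from candidate second sub-images.

    candidates: iterable of (position_id, sub_image) pairs drawn from the
        M second sub-images.
    target_positions: position identifiers of the P target first sub-images.
    has_missing_region: predicate returning True if a sub-image contains a
        pixel missing area (a stand-in for the patent's detection step).
    """
    needed = set(target_positions)
    fills = {}
    for pos, sub in candidates:
        if not needed:
            break  # P pixel-filled images have been obtained
        # A candidate is rejected if it contains a missing area or if its
        # identifier matches no still-unfilled target first sub-image.
        if pos in needed and not has_missing_region(sub):
            fills[pos] = sub
            needed.discard(pos)
    return fills
```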
Optionally, the image merging sub-module 322 includes:
an image replacement unit, configured to replace the P target first sub-images in the first image with the P pixel-filled images;
a target image generation unit, configured to take the first image in which the P target first sub-images have been replaced by the P pixel-filled images as the target image.
Optionally, the mobile terminal is preset with a plurality of area ratios and a plurality of segmentation quantities respectively corresponding to the area ratios, and the pixel filling-up image extraction sub-module 321 further includes:
a target area ratio calculating unit, configured to calculate a target area ratio of the pixel missing region of the first image to the first image;
and the target segmentation quantity searching unit is used for searching the target segmentation quantity N corresponding to the target area ratio.
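A possible form of this preset lookup table is shown below in Python. The concrete ratio thresholds and segmentation counts are assumptions chosen for illustration; the patent only requires that each preset area ratio corresponds to a segmentation quantity.

```python
# Illustrative (ratio_upper_bound, segmentation_count) presets: the larger
# the light-spot area relative to the image, the finer the segmentation.
RATIO_TO_N = [(0.10, 4), (0.25, 9), (0.50, 16)]


def lookup_segmentation_count(missing_pixels, total_pixels, presets=RATIO_TO_N):
    """Find the target segmentation quantity N for the target area ratio."""
    ratio = missing_pixels / total_pixels
    for upper_bound, n in presets:
        if ratio <= upper_bound:
            return n
    return presets[-1][1]  # fall back to the finest preset segmentation
```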
Optionally, the scanning module 310 includes:
the single-frame image scanning submodule 311 is configured to, when receiving a code identification instruction submitted by a user for the coded image, invoke a camera of the mobile terminal to scan the coded image to obtain a single-frame image;
an angle adjustment prompt generation sub-module 312, configured to generate an angle adjustment prompt when it is identified that a pixel missing region exists in the single frame image; the angle adjustment prompt is used for prompting the user to adjust the angle of the camera for scanning the coded image within preset time;
the multi-frame image scanning submodule 313 is used for calling the camera to scan for multiple times within the preset time to obtain multi-frame images at continuous moments;
an initial image generation sub-module 314 configured to use the plurality of frames of images as the plurality of initial images.
Optionally, the mobile terminal 300 is configured with a light source intensity value, where the light source intensity value is used to control the light source intensity of the mobile terminal, and the scanning module 310 further includes:
a light source intensity value detection submodule 315 configured to detect the light source intensity value;
a light source intensity value reduction submodule 316, configured to reduce the light source intensity value if the light source intensity value is greater than a preset intensity threshold value.
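The intensity check performed by these two sub-modules amounts to a simple clamp, sketched below. The reduction step size is an illustrative assumption, since the patent only specifies that the value is reduced when it exceeds the preset threshold.

```python
def adjust_light_source(intensity, threshold, step=0.2):
    """Reduce the light source intensity value when it exceeds a preset
    intensity threshold, weakening the light spot on the coded image.
    The 20% reduction step is illustrative, not specified by the patent.
    """
    if intensity > threshold:
        return max(threshold, intensity * (1 - step))
    return intensity
```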
The mobile terminal provided in the third embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of fig. 2 and fig. 3, and is not described herein again to avoid repetition.
Fig. 7 is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, where the mobile terminal 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, and power supply 411. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 7 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 410 is configured to scan the same encoded image multiple times to obtain multiple initial images; synthesizing the plurality of initial images to obtain a target image; and identifying the target image to obtain an identification result.
In the embodiment of the invention, a plurality of initial images are obtained by scanning the same coded image for a plurality of times, and a plurality of initial images are synthesized to obtain the target image, wherein the obtained target image contains relatively complete codes, and the codes can still be identified under the condition that the coded image has light spots for shielding partial codes, so that the problem that the codes cannot be identified by the conventional code identification method is solved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used to receive and transmit signals during message transmission and reception or during a call; specifically, it receives downlink data from a base station and forwards the data to the processor 410 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 401 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 402, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402 or stored in the memory 409 into an audio signal and output as sound. Also, the audio output unit 403 may also provide audio output related to a specific function performed by the mobile terminal 400 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive audio or video signals. The input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042, where the graphics processor 4041 processes image data of still pictures or video obtained by an image capturing apparatus (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 406. The image frames processed by the graphics processor 4041 may be stored in the memory 409 (or another storage medium) or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 can receive sound and process it into audio data. In the phone call mode, the processed audio data can be converted into a format transmittable to a mobile communication base station via the radio frequency unit 401 and output.
The mobile terminal 400 also includes at least one sensor 405, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 4061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 4061 and/or a backlight when the mobile terminal 400 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described in detail herein.
The display unit 406 is used to display information input by the user or information provided to the user. The Display unit 406 may include a Display panel 4061, and the Display panel 4061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 407 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. Touch panel 4071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 4071 using a finger, a stylus, or any suitable object or attachment). The touch panel 4071 may include two portions, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 410, receives a command from the processor 410, and executes the command. In addition, the touch panel 4071 can be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 4071, the user input unit 407 may include other input devices 4072. Specifically, the other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 4071 can be overlaid on the display panel 4061, and when the touch panel 4071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 410 to determine the type of the touch event, and then the processor 410 provides a corresponding visual output on the display panel 4061 according to the type of the touch event. Although in fig. 7, the touch panel 4071 and the display panel 4061 are two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 4071 and the display panel 4061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 408 is an interface through which an external device is connected to the mobile terminal 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 400 or may be used to transmit data between the mobile terminal 400 and external devices.
The memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 409 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 410 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 409 and calling data stored in the memory 409, thereby integrally monitoring the mobile terminal. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The mobile terminal 400 may further include a power supply 411 (e.g., a battery) for supplying power to various components, and preferably, the power supply 411 may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the mobile terminal 400 includes some functional modules that are not shown, and thus, are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 410, a memory 409, and a computer program stored in the memory 409 and executable on the processor 410. When executed by the processor 410, the computer program implements each process of the above code identification method embodiments and can achieve the same technical effect, which is not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by the processor 410, the computer program implements the processes of the above-mentioned embodiment of the code identification method, and can achieve the same technical effect, and in order to avoid repetition, the description of the process is omitted here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better embodiment. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to these embodiments, which are illustrative rather than restrictive. It will be apparent to those skilled in the art that many further modifications and variations can be made without departing from the spirit of the invention and the scope of the appended claims.

Claims (8)

1. A code identification method is applied to a mobile terminal, and is characterized by comprising the following steps:
scanning the same coded image for multiple times to obtain multiple initial images;
synthesizing the plurality of initial images to obtain a target image;
identifying the target image to obtain an identification result;
the step of synthesizing the plurality of initial images to obtain the target image includes:
when the plurality of initial images are identified to have pixel missing regions, extracting a plurality of pixel-supplemented images from the plurality of initial images aiming at the pixel missing regions in at least one initial image;
combining the at least one initial image and the plurality of pixel filling images to obtain the target image;
when it is recognized that the plurality of initial images all have the pixel missing region, the step of extracting the plurality of pixel-complementing images from the plurality of initial images for the pixel missing region in at least one initial image includes:
segmenting the plurality of initial images to obtain a plurality of initial sub-images; the image segmentation mode is determined according to the size and the position of the light spot;
adding a position identifier to the initial sub-image aiming at the position of the initial sub-image in the initial image;
extracting initial sub-images with matched position identifications from the plurality of initial images as pixel filling-in images aiming at the position identifications of the pixel missing regions in at least one initial image;
the scanning the same coded image for multiple times to obtain multiple initial images includes:
when a code identification instruction submitted by a user aiming at the coded image is received, a camera of the mobile terminal is called to scan the coded image to obtain a single-frame image;
when the single-frame image is identified to have a pixel missing area, generating an angle adjustment prompt; the angle adjustment prompt is used for prompting the user to adjust the angle of the camera for scanning the coded image within preset time.
2. The method according to claim 1, wherein the step of extracting a plurality of pixel-filled images from the plurality of initial images for the pixel-missing region in at least one of the initial images comprises:
extracting at least two initial images as a first image and a second image respectively;
segmenting the first image and the second image to obtain N first sub-images and M second sub-images; wherein M ≥ N > 1;
identifying P target first sub-images containing pixel missing areas in the N first sub-images; wherein N > P ≥ 1;
and extracting, from the M second sub-images, P second sub-images that do not contain pixel missing areas and match the P target first sub-images, as P pixel-filled images.
3. The method according to claim 2, wherein the N first sub-images and the M second sub-images each have an image identifier, and the step of extracting, from the M second sub-images, P second sub-images that do not include a missing pixel region and match the P target first sub-images as P pixel-filled images comprises:
selecting candidate second sub-images from the M second sub-images;
if the candidate second sub-image does not contain a pixel missing area and the image identifier of the candidate second sub-image matches the image identifier of at least one target first sub-image, extracting the candidate second sub-image as a pixel-filled image;
and if the candidate second sub-images contain pixel missing areas or the image identifications of the candidate second sub-images are not matched with the image identifications of the P target first sub-images, returning to the step of selecting the candidate second sub-images from the M second sub-images until P pixel filling-up images are obtained.
4. A mobile terminal, characterized in that the mobile terminal comprises:
the scanning module is used for scanning the same coded image for multiple times to obtain a plurality of initial images;
the synthesis module is used for synthesizing the plurality of initial images to obtain a target image;
the identification module is used for identifying the target image to obtain an identification result;
the synthesis module comprises:
the pixel filling-in image extraction submodule is used for extracting a plurality of pixel filling-in images from the plurality of initial images aiming at the pixel missing region in at least one initial image when the plurality of initial images are identified to have the pixel missing region;
the image merging submodule is used for merging the at least one initial image and the plurality of pixel filling images to obtain the target image;
the pixel filling-in image extraction submodule is also used for segmenting the plurality of initial images to obtain a plurality of initial sub-images; the image segmentation mode is determined according to the size and the position of the light spot; adding a position identifier to the initial sub-image aiming at the position of the initial sub-image in the initial image; extracting initial sub-images with matched position identifications from the plurality of initial images as pixel filling-in images aiming at the position identifications of the pixel missing regions in at least one initial image;
the scanning module is further used for calling a camera of the mobile terminal to scan the coded image to obtain a single-frame image when receiving a code identification instruction submitted by a user aiming at the coded image; when the single-frame image is identified to have a pixel missing area, generating an angle adjustment prompt; the angle adjustment prompt is used for prompting the user to adjust the angle of the camera for scanning the coded image within preset time.
5. The mobile terminal of claim 4, wherein the pixel-filled image extraction sub-module comprises:
an image extraction unit, configured to extract at least two of the initial images as a first image and a second image, respectively;
the image segmentation unit is used for segmenting the first image and the second image to obtain N first sub-images and M second sub-images; wherein M ≥ N > 1;
a target first sub-image identifying unit, configured to identify, in the N first sub-images, P target first sub-images containing a pixel missing region; wherein N > P ≥ 1;
and the pixel filling-in image extracting unit is used for extracting P second sub-images which do not contain pixel missing areas and are matched with the P target first sub-images from the M second sub-images to serve as P pixel filling-in images.
6. The mobile terminal according to claim 5, wherein the N first sub-images and the M second sub-images each have an image identifier, and the pixel-filling-up image extracting unit comprises:
a candidate second sub-image selecting sub-unit, configured to select candidate second sub-images from the M second sub-images;
a pixel filling-in image extracting subunit, configured to extract the candidate second sub-image as the pixel filling-in image if the candidate second sub-image does not include a pixel missing area and an image identifier of the candidate second sub-image matches an image identifier of at least one target first sub-image;
and a returning subunit, configured to, if the candidate second sub-image includes a pixel missing region or the image identifier of the candidate second sub-image is not matched with the image identifiers of the P target first sub-images, return to the step of selecting the candidate second sub-image from the M second sub-images until P pixel-complemented images are obtained.
7. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the code recognition method according to any one of claims 1 to 3.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of a code recognition method according to any one of claims 1 to 3.
CN201711051640.5A 2017-10-31 2017-10-31 Code identification method and mobile terminal Active CN107992780B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711051640.5A CN107992780B (en) 2017-10-31 2017-10-31 Code identification method and mobile terminal

Publications (2)

Publication Number Publication Date
CN107992780A CN107992780A (en) 2018-05-04
CN107992780B true CN107992780B (en) 2021-05-28

Family

ID=62030071

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711051640.5A Active CN107992780B (en) 2017-10-31 2017-10-31 Code identification method and mobile terminal

Country Status (1)

Country Link
CN (1) CN107992780B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108563972B (en) * 2018-03-09 2021-11-16 Oppo广东移动通信有限公司 Graphic code identification method and device, mobile terminal and storage medium
CN109492451B (en) * 2018-10-30 2022-08-16 维沃移动通信有限公司 Coded image identification method and mobile terminal
CN110008781B (en) * 2019-04-19 2022-06-03 重庆三峡学院 Two-dimensional multi-frame modulation and demodulation method
CN111213366B (en) * 2019-05-23 2021-07-13 深圳市瑞立视多媒体科技有限公司 Rigid body identification method, device and system and terminal equipment
CN110674660B (en) * 2019-09-26 2021-11-05 珠海格力电器股份有限公司 Method and device for determining graphic code information, terminal equipment and household appliance
CN112766012B (en) * 2021-02-05 2021-12-17 腾讯科技(深圳)有限公司 Two-dimensional code image recognition method and device, electronic equipment and storage medium
CN113361458A (en) * 2021-06-29 2021-09-07 北京百度网讯科技有限公司 Target object identification method and device based on video, vehicle and road side equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101147157A (en) * 2005-01-26 2008-03-19 数字逻辑扫描公司 Data reader and methods for imaging targets subject to specular reflection
CN105631828A (en) * 2015-12-29 2016-06-01 华为技术有限公司 Image processing method and device
CN107145810A (en) * 2017-04-26 2017-09-08 南京理工大学 A kind of comprehensive bar code identifying device and method


Similar Documents

Publication Publication Date Title
CN107992780B (en) Code identification method and mobile terminal
CN110443330B (en) Code scanning method and device, mobile terminal and storage medium
CN110674662B (en) Scanning method and terminal equipment
CN107977652B (en) Method for extracting screen display content and mobile terminal
CN110113528B (en) Parameter obtaining method and terminal equipment
CN107846583B (en) Image shadow compensation method and mobile terminal
CN109495616B (en) Photographing method and terminal equipment
CN109388456B (en) Head portrait selection method and mobile terminal
CN108763998B (en) Bar code identification method and terminal equipment
CN109257505B (en) Screen control method and mobile terminal
CN108460817B (en) Jigsaw puzzle method and mobile terminal
CN107704182B (en) Code scanning method and mobile terminal
CN107734172B (en) Information display method and mobile terminal
CN111401463B (en) Method for outputting detection result, electronic equipment and medium
CN110519503B (en) Method for acquiring scanned image and mobile terminal
CN108174109B (en) Photographing method and mobile terminal
CN110209324B (en) Display method and terminal equipment
CN108259756B (en) Image shooting method and mobile terminal
CN111031178A (en) Video stream clipping method and electronic equipment
CN109639981B (en) Image shooting method and mobile terminal
CN111007980A (en) Information input method and terminal equipment
CN109819331B (en) Video call method, device and mobile terminal
CN108304744B (en) Scanning frame position determining method and mobile terminal
CN111432122A (en) Image processing method and electronic equipment
CN108063884B (en) Image processing method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant