CN112560840B - Method for identifying multiple identification areas, identification terminal, and readable storage medium - Google Patents

Method for identifying multiple identification areas, identification terminal, and readable storage medium

Info

Publication number
CN112560840B
CN112560840B (application CN202011419732.6A)
Authority
CN
China
Prior art keywords
identification
scene
scanned image
area
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011419732.6A
Other languages
Chinese (zh)
Other versions
CN112560840A (en)
Inventor
王林祥
赵皎平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Irain IoT Technology Service Co Ltd
Original Assignee
Xian Irain IoT Technology Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Irain IoT Technology Service Co Ltd filed Critical Xian Irain IoT Technology Service Co Ltd
Priority to CN202011419732.6A priority Critical patent/CN112560840B/en
Publication of CN112560840A publication Critical patent/CN112560840A/en
Application granted granted Critical
Publication of CN112560840B publication Critical patent/CN112560840B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a method for identifying multiple identification areas, comprising the steps of: acquiring a scanned image from an acquisition end; determining each key identification area of the scanned image and acquiring each identification frame corresponding to each key identification area; and identifying each target feature within each identification frame of the scanned image, analyzing the target features to obtain the target feature information of each key identification area, and sending the target feature information to an information acquisition end. The invention also discloses an identification terminal and a computer-readable storage medium. The invention enables multiple pieces of feature information to be obtained in a single scan by identifying multiple identification areas.

Description

Method for identifying multiple identification areas, identification terminal, and readable storage medium
This application is a divisional of application No. 201811099817.3, filed on September 20, 2018, the parent application being entitled: Method for identifying multiple identification areas, identification terminal, and readable storage medium.
Technical Field
The present invention relates to the field of image scanning and recognition, and in particular, to a method for recognizing multiple recognition areas, a recognition terminal, and a readable storage medium.
Background
In existing scanning and identification technology, the scanning area is usually limited during scanning in order to better identify a target object or graphic code: when the target object or graphic code to be scanned falls within the limited area, the identification rate is higher, and one matching piece of feature information can be identified from the limited scanning area. In some special fields, however, two or more pieces of identification information must be acquired in one scan, so two or more target objects or graphic codes need to be accurately identified in a single scan. A single limited scanning area can identify only one piece of feature information and cannot identify two or more pieces of identification information.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The invention mainly aims to provide a method for identifying multiple identification areas, an identification terminal, and a readable storage medium, so as to solve the technical problem that a single limited scanning area cannot identify two or more pieces of identification information.
In order to achieve the above object, the present invention provides a method for identifying multiple identification areas, including the following steps:
acquiring a scanning image from an acquisition end;
determining each key identification area of the scanned image, and acquiring each identification frame corresponding to each key identification area;
and identifying each target feature in each identification frame of the scanned image, analyzing to obtain each target feature information of each key identification area, and sending the target feature information to an information acquisition end.
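The three claimed steps can be sketched as a minimal illustrative outline; the `Frame` type and all callables below are hypothetical stand-ins, not part of the patent:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    """Identification frame: axis-aligned box around one key identification area."""
    x: int
    y: int
    w: int
    h: int

def identify_scan(image, determine_key_areas, recognize, parse, send):
    """Hypothetical pipeline mirroring the three claimed steps.

    `image` is a 2-D list of pixel rows; the callables stand in for the
    terminal's real routines, which the patent does not specify.
    """
    frames: List[Frame] = determine_key_areas(image)        # step 2: key areas -> frames
    results = []
    for f in frames:
        # restrict recognition to the region inside the identification frame
        crop = [row[f.x:f.x + f.w] for row in image[f.y:f.y + f.h]]
        feature = recognize(crop)                           # step 3a: target feature
        results.append(parse(feature))                      # step 3b: feature information
    send(results)                                           # step 3c: to the acquisition end
    return results
```

The point of the sketch is only that recognition is run per frame, so one scan yields one piece of feature information per key identification area.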
Optionally, the step of determining each key identification area of the scanned image and acquiring each identification frame corresponding to each key identification area includes:
determining each key identification area of the scanned image according to a preset default scene, and acquiring each identification frame corresponding to each key identification area; or
Performing image scene analysis on the acquired scanned image, and determining the identification scene type to which the image scene of the scanned image belongs;
and determining each key identification area of the scanned image according to the identification scene type, and acquiring each identification frame corresponding to each key identification area.
Optionally, the step of performing image scene analysis on the acquired scanned image, and determining the identification scene type to which the image scene of the scanned image belongs includes:
performing image scene analysis on the acquired scanning image to determine scene-related objects in the scanning image;
comparing the scene associated object with standard objects in each preset identification scene to obtain a first comparison result;
acquiring a target identification scene corresponding to the scanning image from a preset identification scene according to a first comparison result;
and determining the identification scene type to which the image scene of the scanned image belongs according to the target identification scene.
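The scene-matching steps above can be sketched as follows; the scene names, object labels, and the overlap-count heuristic used as the "first comparison result" are all assumptions for illustration:

```python
# Hypothetical preset identification scenes, each with its standard objects.
PRESET_SCENES = {
    "vehicle": {"license_plate", "logo", "car_body"},
    "document": {"title_block", "signature", "seal"},
}

def classify_scene(detected_objects):
    """Return the preset scene whose standard objects best match the
    scene-associated objects detected in the scanned image, or None."""
    detected = set(detected_objects)
    best_scene, best_overlap = None, 0
    for scene, standard_objects in PRESET_SCENES.items():
        overlap = len(detected & standard_objects)   # comparison result
        if overlap > best_overlap:
            best_scene, best_overlap = scene, overlap
    return best_scene
```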
Optionally, the step of determining each key identification area of the scanned image according to the identification scene type and acquiring each identification frame corresponding to each key identification area includes:
determining each preset identification area according to the identification scene type of the image scene of the scanned image;
according to each preset identification area, correspondingly determining the specific position of each key identification area of the scanned image;
and generating each identification frame of the scanning image at the periphery of each key identification area according to the specific position of each key identification area.
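As a hedged illustration of generating a frame "at the periphery" of a key identification area, one simple choice is to pad the area's bounding box by a margin and clip to the image; the margin value and the (x, y, w, h) convention are assumptions, not the patent's specification:

```python
def frame_around(area, margin=10, img_w=10**9, img_h=10**9):
    """Return an identification frame enclosing `area` = (x, y, w, h),
    expanded by `margin` on each side and clipped to the image bounds,
    so the key identification area falls entirely inside the frame."""
    x, y, w, h = area
    fx, fy = max(0, x - margin), max(0, y - margin)
    fw = min(img_w - fx, w + 2 * margin)
    fh = min(img_h - fy, h + 2 * margin)
    return (fx, fy, fw, fh)
```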
Optionally, the step of determining each key identification area of the scanned image according to the identification scene type and acquiring each identification frame corresponding to each key identification area includes:
reading each identification area recorded and stored over a plurality of previous identifications, according to the identification scene type to which the image scene of the scanned image belongs;
correspondingly determining the specific position of each key identification area of the scanned image according to each stored and recorded identification area;
and generating each identification frame of the scanned image at the periphery of each key identification area according to the specific position of each key identification area.
Optionally, the step of identifying each target feature in each identification frame of the scanned image, analyzing each target feature information of each key identification area, and sending the target feature information to the information obtaining end includes:
identifying each target feature within each identification frame of the scanned image;
according to the identified target features, respectively analyzing the target feature information contained in each target feature;
obtaining the target characteristic information of each key identification area of the scanned image;
and acquiring the characteristic information of each target of the scanned image, and sending the characteristic information to an information acquisition end.
Optionally, after the step of analyzing the target feature information contained in each target feature according to the identified target features, the method further includes:
acquiring each target feature whose target feature information cannot be obtained by analysis, and acquiring each key identification area for which feature information cannot be obtained;
enlarging, on the scanned image, each key identification area for which feature information cannot be obtained;
enlarging each corresponding identification frame according to the enlargement of each key identification area for which feature information cannot be obtained;
and identifying each target feature within each enlarged identification frame again, and analyzing the target feature information contained in each target feature.
Optionally, after the step of determining each key identification area of the scanned image and obtaining each identification frame corresponding to each key identification area, the method further includes:
acquiring a first identification frame which is manually deleted by a user from the scanned image;
acquiring a second identification frame which is manually added to the scanned image by a user;
and removing the first identification frame and adding the second identification frame in the acquired identification frames of the scanned image to obtain the final identification frames of the scanned image.
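The manual-editing step can be illustrated with a small sketch; the tuple representation of frames is an assumption:

```python
def apply_manual_edits(auto_frames, deleted_frames, added_frames):
    """Remove the frames the user manually deleted (the 'first' frames) and
    append the frames the user manually added (the 'second' frames) to obtain
    the final identification frames of the scanned image."""
    kept = [f for f in auto_frames if f not in deleted_frames]
    return kept + list(added_frames)
```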
In addition, to achieve the above object, the present invention also provides an identification terminal, including: a memory, a processor, and a program for identifying multiple identification areas that is stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the method for identifying multiple identification areas described above.
In addition, in order to achieve the above object, the present invention also provides a computer-readable storage medium storing a program for identifying multiple identification areas, wherein the program, when executed by a processor, implements the steps of the method for identifying multiple identification areas according to any one of the above.
According to the method for identifying multiple identification areas provided by the embodiments of the invention, after the scanned image is acquired from the acquisition end, image scene analysis is performed on the scanned image to determine the identification scene type, so that each key identification area of the scanned image and each identification frame used to identify the features of each key identification area are determined; the key identification areas can also be added or removed according to the user's manual selection. The features within each identification frame are then identified and analyzed according to each key identification area to obtain the feature information contained in each key identification area. The method thus enables multiple pieces of feature information to be obtained in a single scan by identifying multiple identification areas, and a key identification area can be identified again after being enlarged and displayed, further improving the accuracy of information identification.
Drawings
FIG. 1 is a schematic diagram of an identification terminal structure of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart of a first embodiment of a method for identifying a plurality of identification areas according to the present invention;
FIG. 3 is a schematic diagram of an identification frame containing relationships;
FIG. 4 is a schematic diagram of a recognition frame in a side-by-side relationship;
FIG. 5 is a schematic diagram of a relationship between a key recognition area and a recognition frame;
FIG. 6 is a schematic diagram of the refinement flow of step S21;
FIG. 7 is a schematic diagram of the refinement flow of step S22;
FIG. 8 is a flowchart of a fourth embodiment of a method for identifying multiple identification areas according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The main solutions of the embodiments of the present invention are: acquiring a scanning image from an acquisition end; determining each key identification area of the scanned image, and acquiring each identification frame corresponding to each key identification area; and identifying each target feature in each identification frame of the scanned image, analyzing to obtain each target feature information of each key identification area, and sending the target feature information to an information acquisition end.
In the prior art, in order to achieve a higher recognition rate, the scanning area is often limited when a target object or graphic code is recognized: when the target object or graphic code to be scanned falls within the limited area, one matching piece of feature information can be recognized from that limited scanning area. However, only one piece of feature information can be recognized from a single limited scanning area, so when multiple pieces of feature information need to be recognized simultaneously in one scan, a single limited scanning area is insufficient.
The invention provides a solution that can identify multiple pieces of feature information simultaneously in one scan, effectively solving the problem that a single limited scanning area cannot identify two or more pieces of identification information; in addition, a key identification area can be identified after being enlarged and displayed, further improving the accuracy of information identification.
As shown in fig. 1, fig. 1 is a schematic diagram of an identification terminal structure of a hardware running environment according to an embodiment of the present invention.
The identification terminal of the embodiment of the invention can be a PC, or can be a mobile terminal device with a display function, such as a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a portable computer, and the like.
As shown in fig. 1, the identification terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display, an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
It will be appreciated by those skilled in the art that the identification terminal structure shown in fig. 1 does not constitute a limitation of the identification terminal, and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and a plurality of identification area identification programs may be included in a memory 1005 as one type of computer storage medium.
In the identification terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and performing data communication with the background server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call a plurality of identification area identification programs stored in the memory 1005 and perform the following operations:
acquiring a scanning image from an acquisition end;
determining each key identification area of the scanned image, and acquiring each identification frame corresponding to each key identification area;
and identifying each target feature in each identification frame of the scanned image, analyzing to obtain each target feature information of each key identification area, and sending the target feature information to an information acquisition end.
Further, the step of determining each key identification area of the scanned image and acquiring each identification frame corresponding to each key identification area includes:
determining each key identification area of the scanned image according to a preset default scene, and acquiring each identification frame corresponding to each key identification area; or
Performing image scene analysis on the acquired scanned image, and determining the identification scene type to which the image scene of the scanned image belongs;
and determining each key identification area of the scanned image according to the identification scene type, and acquiring each identification frame corresponding to each key identification area.
Further, the step of performing image scene analysis on the acquired scanned image, and determining the identification scene type to which the image scene of the scanned image belongs includes:
performing image scene analysis on the acquired scanning image to determine scene-related objects in the scanning image;
comparing the scene associated object with standard objects in each preset identification scene to obtain a first comparison result;
acquiring a target identification scene corresponding to the scanning image from a preset identification scene according to a first comparison result;
and determining the identification scene type to which the image scene of the scanned image belongs according to the target identification scene.
Further, the step of determining each key identification area of the scanned image according to the identification scene type and acquiring each identification frame corresponding to each key identification area includes:
determining each preset identification area according to the identification scene type of the image scene of the scanned image;
according to each preset identification area, correspondingly determining the specific position of each key identification area of the scanned image;
and generating each identification frame of the scanned image at the periphery of each key identification area according to the specific position of each key identification area.
Further, the step of determining each key identification area of the scanned image according to the identification scene type and acquiring each identification frame corresponding to each key identification area includes:
reading each identification area recorded and stored over a plurality of previous identifications, according to the identification scene type to which the image scene of the scanned image belongs;
correspondingly determining the specific position of each key identification area of the scanned image according to each stored and recorded identification area;
and generating each identification frame of the scanned image at the periphery of each key identification area according to the specific position of each key identification area.
Further, the step of identifying each target feature in each identification frame of the scanned image, analyzing each target feature information of each key identification area, and sending the target feature information to the information acquisition end includes:
identifying each target feature within each identification frame of the scanned image;
according to the identified target features, respectively analyzing target feature information contained in the target features;
obtaining the target feature information of each key identification area of the scanned image;
and acquiring the characteristic information of each target of the scanned image, and sending the characteristic information to an information acquisition end.
Further, after the step of analyzing the respective target feature information included in the respective target features according to the identified respective target features, the processor 1001 may call a plurality of identification area identifying programs stored in the memory 1005, and further perform the following operations:
acquiring each target feature whose target feature information cannot be obtained by analysis, and acquiring each key identification area for which feature information cannot be obtained;
enlarging, on the scanned image, each key identification area for which feature information cannot be obtained;
enlarging each corresponding identification frame according to the enlargement of each key identification area for which feature information cannot be obtained;
and identifying each target feature within each enlarged identification frame again, and analyzing the target feature information contained in each target feature.
Further, after the step of determining each key identification region of the scanned image and acquiring each identification frame corresponding to each key identification region, the processor 1001 may call a plurality of identification region identification programs stored in the memory 1005, and further perform the following operations:
acquiring a first identification frame which is manually deleted by a user from the scanned image;
acquiring a second identification frame which is manually added to the scanned image by a user;
and removing the first identification frame and adding the second identification frame in the acquired identification frames of the scanned image to obtain the final identification frames of the scanned image.
Based on the above hardware structure, the method embodiment of the present invention is presented.
Referring to fig. 2, in a first embodiment of the multiple recognition area recognition method of the present invention, the multiple recognition area recognition method includes:
step S10, acquiring a scanning image from an acquisition end;
the execution subject of the method of this embodiment may be an identification terminal, which may be carried in a network server.
The identification terminal can acquire a scanned image from the acquisition end; the acquisition end can capture the scanned image through a device or module such as a camera and send it to the identification terminal. The scanned image is captured from a target object that requires identification in multiple identification areas. For example, the target object can be any object from which multiple pieces of feature information must be identified, such as an automobile, a printer, or a projection device, provided the target object contains identifiable feature information.
Step S20, determining each key identification area of the scanned image, and acquiring each identification frame corresponding to each key identification area;
the identification terminal first determines each key identification area of the scanned image and then generates the corresponding identification frames according to each key identification area. Each key identification area is an area on the target object that contains feature information; the identification terminal obtains the required feature information by identifying the key identification area. Each identification frame is generated around a key identification area according to the extent of that area and is used to limit the range to be identified, so that the key identification area falls entirely within the identification range limited by the frame. As shown in fig. 3 and fig. 4, each recognition scene type has a plurality of corresponding recognition frames, and the recognition frames of a scene can be in a containment or parallel relationship. Fig. 3 shows recognition frames in a containment relationship: 1 is the scanning area of the scanned image, 2 is a recognition frame, and two recognition frames are entirely contained within one recognition frame. Fig. 4 shows two recognition frames in a parallel relationship: 1 is the scanning area of the scanned image and 2 is a recognition frame; two parallel recognition frames may not overlap at all, or may partially but not completely overlap, and a recognition scene may have more or fewer recognition frames than shown in the figures.
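The containment and parallel relationships of figs. 3 and 4 can be stated geometrically; the following helper is a hypothetical illustration, not part of the patent:

```python
def frame_relation(a, b):
    """Classify the relationship between two axis-aligned recognition frames,
    each given as (x, y, w, h): containment (fig. 3) or parallel, where
    parallel frames may be disjoint or partially overlapping (fig. 4)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    if ax <= bx and ay <= by and bx + bw <= ax + aw and by + bh <= ay + ah:
        return "contains"                        # fig. 3: b lies entirely inside a
    ox = min(ax + aw, bx + bw) - max(ax, bx)     # horizontal overlap
    oy = min(ay + ah, by + bh) - max(ay, by)     # vertical overlap
    return "parallel-overlapping" if ox > 0 and oy > 0 else "parallel-disjoint"
```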
Step S30, identifying each target feature in each identification frame of the scanned image, analyzing and obtaining each target feature information of each key identification area, and sending the target feature information to an information acquisition end.
For ease of understanding, suppose the scanned target object is an automobile and the recognition scene type requires identifying three pieces of information: the license plate number, the logo, and the vehicle type. The identification terminal determines the key identification areas and identification frames of the license plate number, the logo, and the vehicle type from the recognition scene type; it then recognizes the target features within the license plate number, logo, and vehicle type recognition frames respectively, analyzes the target features to obtain the target feature information, and sends the target feature information to the information acquisition end.
The target features refer to information such as the text, characters, size, and shape contained in the key recognition areas of the license plate number, the logo, and the vehicle type. Each piece of target feature information is the uniquely determined content obtained by analyzing such text, character, size, and shape information with a feature extraction algorithm; for example, from the license plate number text in the license plate number key identification area, the character string of the license plate number is obtained after analysis with a feature extraction algorithm.
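As a purely illustrative stand-in for the feature extraction algorithm (which the patent does not specify), a plate string might be pulled from the OCR text of the license-plate area with a pattern match; the pattern below is an assumed, simplified format, not any real plate standard:

```python
import re

# Hypothetical simplified plate format: 1-2 letters followed by 4-5 digits.
PLATE_PATTERN = re.compile(r"[A-Z]{1,2}\d{4,5}")

def parse_plate(raw_text):
    """Return the first plate-like token in the raw OCR text, or None."""
    match = PLATE_PATTERN.search(raw_text.upper().replace(" ", ""))
    return match.group(0) if match else None
```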
Further, step S30 includes:
step A1, respectively identifying each target feature in each identification frame of the scanned image;
for the convenience of understanding, following the above example in step S30, the recognition terminal recognizes the target feature of the license plate number through the license plate number recognition frame, recognizes the target feature of the logo through the logo recognition frame, and recognizes the target feature of the vehicle type through the vehicle type recognition frame.
Step A2, according to each identified target feature, analyzing each target feature information contained in each target feature;
the identification terminal can analyze information such as the text, characters, size, and shape of each target feature to obtain the target feature information; for example, after the target features of the license plate number, the logo, and the vehicle type are respectively identified, the specific license plate number, logo, and vehicle type can be obtained from those target features using a feature extraction algorithm.
Step A3, obtaining the target characteristic information of each key identification area of the scanned image;
and the identification terminal determines target feature information by analyzing the target features in each identification frame, so as to obtain each target feature information of each key identification area of the scanning image.
And step A4, acquiring the characteristic information of each target of the scanned image, and sending the characteristic information to an information acquisition end.
After the identification terminal obtains the target feature information of each key identification area of the scanned image, it sends the target feature information to the information acquisition end, which can then use the target feature information for storage, deletion, control, and the like.
Further, step A2 further includes:
step B1, acquiring each target feature which cannot be analyzed to obtain the target feature information, and acquiring each key identification area which corresponds to the feature information which cannot be obtained;
the identification terminal acquires each target feature which cannot be analyzed according to the step A2, and each key identification area corresponding to each target feature of the target feature information cannot be analyzed, namely each key identification area which cannot acquire the feature information; the key recognition area in which the characteristic information cannot be obtained can be determined according to the artificial recognition judgment;
Step B2: acquiring, on the scanned image, each key identification area for which the feature information cannot be obtained, and enlarging each such key identification area;
As shown in fig. 5, 1 is the scanning area of the scanned image, 2 is an identification frame, and 3 is a key identification area. From the target features that cannot be analyzed, the identification terminal determines the areas containing those features, thereby determining the key identification areas for which the feature information cannot be obtained, and then enlarges those key identification areas. Alternatively, after the key identification areas for which the feature information cannot be obtained have been determined by manual inspection, the identification terminal can apply an enlargement operation performed by the user.
Step B3: enlarging each corresponding identification frame according to the enlargement of each key identification area for which the feature information cannot be obtained;
According to the enlargement of each key identification area for which the feature information cannot be obtained, each identification frame is enlarged to the size of the corresponding enlarged key identification area, so that every enlarged key identification area falls entirely within its enlarged identification frame. Alternatively, the identification frames of the key identification areas before enlargement can be removed directly, and new identification frames regenerated at the periphery of the enlarged key identification areas.
Step B4: identifying each target feature in each enlarged identification frame again, and analyzing the target feature information contained in each target feature.
Following steps A1 to A2, each target feature in each enlarged identification frame is identified again, and the target feature information contained in each target feature is analyzed.
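Steps B1 to B4 amount to an enlarge-and-retry loop over the frames that failed analysis. The Python sketch below is illustrative only: the enlargement factor, the clamping to the image bounds, and the `recognize` callback are assumptions, not details fixed by the patent.

```python
def enlarge_box(box, factor, bounds):
    """Steps B2-B3: enlarge a frame about its centre by `factor`, clamped to
    the scanned image bounds (width, height)."""
    l, t, r, b = box
    cx, cy = (l + r) / 2, (t + b) / 2
    hw, hh = (r - l) / 2 * factor, (b - t) / 2 * factor
    w, h = bounds
    return (max(0, int(cx - hw)), max(0, int(cy - hh)),
            min(w, int(cx + hw)), min(h, int(cy + hh)))

def retry_failed_regions(results, boxes, recognize, bounds, factor=2.0):
    """Steps B1-B4: find areas whose feature information could not be analyzed
    (value None), enlarge their frames, and run recognition again on the
    enlarged frames."""
    for name, info in results.items():
        if info is None:                                       # step B1: analysis failed
            bigger = enlarge_box(boxes[name], factor, bounds)  # steps B2-B3
            results[name] = recognize(name, bigger)            # step B4
            boxes[name] = bigger
    return results
```

For example, a 20-pixel-wide frame at (40, 40, 60, 60) enlarged by a factor of 2 inside a 100 by 100 image becomes (30, 30, 70, 70), and recognition is attempted once more on that larger region.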
In this embodiment, after a scanned image is acquired from the acquisition end, image scene analysis is performed on the scanned image to determine the identification scene type, from which each key identification area of the scanned image and each identification frame for identifying the features of those areas are determined. The features within each identification frame are then identified and analyzed to obtain the feature information contained in each key identification area. In this way, multiple identification areas can be identified in a single scan to obtain multiple pieces of feature information, and the key identification areas can be enlarged for display, so that identification proceeds only after their clarity is ensured, further improving the accuracy of information identification.
Further, referring to fig. 6, in a second embodiment of the method for identifying multiple identification areas of the present invention, based on the embodiment shown in fig. 2, step S20 includes:
step C, determining each key identification area of the scanned image according to a preset default scene, and acquiring each identification frame corresponding to each key identification area;
The preset default scene is a predefined scene in which the number and features of the key identification areas are known in advance. Each time the identification terminal acquires a scanned image, it directly uses the features of the preset key identification areas to locate the areas with the corresponding features in the scanned image, which serve as the key identification areas of the scanned image; corresponding identification frames are then generated around each key identification area.
Or, in order to determine each key identification area of the scanned image and obtain each identification frame corresponding to each key identification area, step S20 includes:
step S21, performing image scene analysis on the acquired scanned image, and determining the identification scene type to which the image scene of the scanned image belongs;
The identification terminal analyzes the image scene of the acquired scanned image, determines what target object needs to be identified in it, and then determines the identification scene type of the image scene according to that target object. The identification scene type to which the image scene of the scanned image belongs is determined by the kind of target object and the information to be acquired from it. For example: if the target object is a printer and the two pieces of information 'printer' and 'printer model' are to be identified simultaneously, this can serve as one identification scene type; if the target object is a projection device and the two pieces of information 'projection device' and 'device model' are to be identified, this can serve as another; if the target object is an automobile and the two pieces of information 'license plate number' and 'vehicle type' need to be identified, this can serve as an identification scene type; and if the target object is an automobile and the three pieces of information 'license plate number', 'logo', and 'vehicle type' need to be identified, this can also serve as an identification scene type.
Step S22, determining each key identification area of the scanned image according to the identification scene type, and acquiring each identification frame corresponding to each key identification area.
The identification terminal determines each preset identification area according to the identification scene type, then uses the features of each preset identification area to locate the areas with those features on the scanned image, which serve as the key identification areas of the scanned image, and then generates corresponding identification frames at the periphery of each key identification area.
Further, step S21 includes:
Step D1: performing image scene analysis on the acquired scanned image, and determining the scene-related objects in the scanned image;
The identification terminal performs image scene analysis on the acquired scanned image and determines the scene-related objects in it; a scene-related object is the target object that the scanned image needs to identify. For example, image scene analysis of the scanned image may determine that the scene-related object to be recognized is a printer, a car, or the like.
Step D2, comparing the scene associated object with standard objects in each preset identification scene to obtain a first comparison result;
A plurality of preset identification scenes are stored in the identification terminal, and each standard object has at least one corresponding preset identification scene. The identification terminal compares the scene-related object with the standard objects in the preset identification scenes: by comparing the features of the scene-related object with the features of each standard object, the standard object closest in features to the scene-related object is obtained as the first comparison result. Continuing the example of step D1, the scene-related object is compared with the car, the printer, the projection device, and so on in the preset identification scenes; if the appearance and structural features of the scene-related object are closest to the printer's features, the scene-related object is determined to be a printer.
Step D3, acquiring a target identification scene corresponding to the scanning image from a preset identification scene according to a first comparison result;
The identification terminal determines the standard object closest to the scene-related object according to the first comparison result, and then takes the preset identification scene corresponding to that closest standard object as the target identification scene. The target identification scene is in fact one of the preset identification scenes, and each preset identification scene is preset with several pieces of information that need to be identified simultaneously. For example, if the standard object closest to the scene-related object is a printer, and the preset identification scene corresponding to the printer is preset to require the two pieces of information 'printer' and 'printer model' to be identified simultaneously, then the target identification scene requires those two pieces of information to be identified simultaneously.
Step D4: determining the identification scene type to which the image scene of the scanned image belongs according to the target identification scene.
The identification terminal determines the pieces of information that need to be identified simultaneously from the pieces of information preset in the target identification scene, with which they correspond one to one. The identification scene type to which the image scene of the scanned image belongs is then determined by the scene-related object together with the pieces of information that need to be identified simultaneously. For example, if the scene-related object is determined to be an automobile and the target identification scene is preset to identify the two pieces of information 'license plate number' and 'vehicle type' simultaneously, then the scanned image must allow those two pieces of information to be identified simultaneously, and the identification scene type obtained for the image scene of the scanned image is: the target object is an automobile, and the license plate number and the vehicle type of the automobile need to be identified.
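Steps D1 to D4 can be illustrated with a toy nearest-match over feature sets. The standard objects, their feature names, and the preset scene contents below are invented examples; a real terminal would compare learned appearance and structure features rather than hand-written labels.

```python
# Hypothetical feature sets standing in for the appearance/structure features
# the terminal would actually extract (all names are illustrative only).
STANDARD_OBJECTS = {
    "printer": {"paper_tray", "control_panel", "rectangular_body"},
    "car":     {"wheels", "windshield", "license_plate_area"},
}

# Each preset identification scene lists the pieces of information that must
# be identified simultaneously (step D3).
PRESET_SCENES = {
    "printer": ["printer", "printer_model"],
    "car":     ["license_plate", "vehicle_type"],
}

def identify_scene_type(scene_object_features):
    """Steps D1-D4: compare the scene-related object's features against every
    standard object, pick the closest one (first comparison result), and return
    it together with its target identification scene."""
    # Step D2: closest standard object = most shared features
    closest = max(STANDARD_OBJECTS,
                  key=lambda name: len(scene_object_features & STANDARD_OBJECTS[name]))
    # Steps D3-D4: the preset scene of that object gives the scene type
    return closest, PRESET_SCENES[closest]
```

An object showing wheels and a windshield shares more features with the "car" standard object than with the "printer", so the target identification scene becomes license plate plus vehicle type.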
Further, in order to determine the identification scene type to which the image scene of the scanned image belongs, the identification terminal may instead send the scanned image to a cloud platform for image scene analysis; the analysis process of the cloud platform and the resulting platform analysis result follow the analysis processes of steps D1 to D4 and are not repeated here. The identification terminal then receives the platform analysis result fed back by the cloud platform, and determines from it the scene-related objects in the scanned image and the identification scene type to which the image scene of the scanned image belongs.
Further, step S22 includes:
step E1, determining each preset identification area according to the identification scene type of the image scene of the scanned image;
In order to identify multiple pieces of information, multiple identification areas are preset in each preset identification scene. The identification terminal determines each preset identification area according to the preset identification scene corresponding to the identification scene type to which the image scene of the scanned image belongs: the identification areas preset in that identification scene are the preset identification areas. For example, in order to identify a license plate number and a logo, the preset identification scene is preset with an identification area for the license plate number and one for the logo, and those two areas are determined as the preset identification areas.
Step E2: correspondingly determining the specific position of each key identification area of the scanned image according to each preset identification area;
After determining each preset identification area, the identification terminal compares the features of each preset identification area with the features of the scanned image to obtain the feature areas of the scanned image. Each feature area of the scanned image serves as a key identification area of the scanned image, and the position of each feature area is the specific position of the corresponding key identification area.
For example, the preset identification area for the license plate number in the preset identification scene is characterized by a blue, white, or yellow background with clearly visible letters or numbers on its surface. Correspondingly, the region of the scanned image with a blue, white, or yellow background and clearly visible letters or numbers on its surface is a key identification area, and the position of that feature region is the specific position of the key identification area of the scanned image.
Step E3: generating each identification frame of the scanned image at the periphery of each key identification area according to the specific position of each key identification area.
The identification terminal generates each identification frame of the scanned image at the periphery of each key identification area according to the specific position of each key identification area in the scanned image, so that every key identification area falls within the range of an identification frame, as shown in fig. 5. By separately identifying the areas within the range of each identification frame, the identification terminal can identify and analyze the feature information of each key identification area.
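Steps E2 and E3 can be sketched for the license-plate example: classify pixels by background colour, take the bounding box of the matching pixels as the specific position of the key identification area, and generate the identification frame at its periphery. This is a pure-Python illustration over a toy grid of colour labels; the real colour classification of image pixels is abstracted away.

```python
def locate_key_area(pixels, background_colors=("blue", "white", "yellow")):
    """Step E2: find the specific position (bounding box) of the key
    identification area by matching the preset feature 'blue/white/yellow
    background'. `pixels` is a row-major grid of colour labels."""
    coords = [(x, y) for y, row in enumerate(pixels)
                     for x, c in enumerate(row) if c in background_colors]
    if not coords:
        return None  # no region with the preset feature found
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return (min(xs), min(ys), max(xs), max(ys))

def make_frame(box, margin=1):
    """Step E3: generate the identification frame at the periphery of the key
    identification area so the whole area falls inside the frame."""
    l, t, r, b = box
    return (l - margin, t - margin, r + margin, b + margin)
```

On a 5 by 4 grid with a blue patch spanning columns 1 to 3 of rows 1 to 2, the key identification area is (1, 1, 3, 2) and the surrounding frame (0, 0, 4, 3).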
In this embodiment, the identification terminal either analyzes the scanned image itself to determine the identification scene type to which its image scene belongs, or sends the scanned image to a cloud platform for that analysis. Different image scenes can thereby be mapped to preset identification scene types, and the identification scene type can be determined more accurately and rapidly.
Further, referring to fig. 7, in a third embodiment of the multiple identification area identifying method of the present invention, based on the embodiment shown in fig. 2, the step S22 includes:
step F1, reading each identification area recorded and stored for a plurality of times according to the identification scene type of the image scene of the scanned image;
Before the current identification of the scanned image, the preset identification scene corresponding to the identification scene type to which the image scene of the scanned image belongs may already contain regions that the user has selected multiple times. The identification terminal records and stores each region selected multiple times by the user as an identification area of that preset identification scene. When the identification terminal recognizes that such regions exist in the preset identification scene, it reads out each identification area that has been recorded and stored multiple times.
Step F2, correspondingly determining the specific positions of each key identification area of the scanned image according to each identification area which is stored and recorded for a plurality of times;
After reading each identification area that has been recorded and stored multiple times, the identification terminal compares the features of each such identification area with the features of the scanned image to obtain the feature areas of the scanned image. Each feature area of the scanned image serves as a key identification area of the scanned image, and the position of each feature area is the specific position of the corresponding key identification area.
Step F3, generating each identification frame of the scanned image at the periphery of each key identification area according to the specific position of each key identification area;
The identification terminal generates each identification frame of the scanned image at the periphery of each key identification area according to the specific position of each key identification area in the scanned image, so that every key identification area falls within the range of an identification frame. By separately identifying the areas within the range of each identification frame, the identification terminal can identify and analyze the feature information of each key identification area.
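Steps F1 and F2 can be sketched as reading back the regions the user has selected repeatedly: a region selected at least a threshold number of times becomes a stored identification area for the scene. The threshold value and the tuple representation of regions are illustrative assumptions, not fixed by the patent.

```python
from collections import Counter

def frequent_regions(selection_history, min_count=3):
    """Steps F1-F2: regions the user has selected at least `min_count` times in
    this identification scene are taken as the recorded identification areas.
    `selection_history` is a list of (left, top, right, bottom) tuples, one
    entry per past user selection."""
    counts = Counter(selection_history)
    return [region for region, n in counts.items() if n >= min_count]
```

A region selected three times is kept as a key identification area for future scans, while a region selected only once is ignored, so the terminal's choices track the user's habits.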
In this embodiment, identification areas selected multiple times in the user's history can be used as the key identification areas, so that the selection of key identification areas is closer to the user's needs and the identification process is more personalized. When the preset identification scene corresponding to the identification scene type to which the image scene of the scanned image belongs has no preset identification areas, the key identification areas can instead be constructed from the identification areas that have been recorded and stored multiple times, and identification then proceeds; the key identification areas therefore do not need to be selected each time, which speeds up their determination.
Further, referring to fig. 8, according to a fourth embodiment of the multiple identification area identifying method of the present invention, based on the embodiment shown in fig. 2, the step S20 further includes:
Step S50: obtaining a first identification frame manually deleted by the user from the scanned image;
The user can remove, as needed, the identification frames on the scanned image that correspond to key identification areas whose feature information does not need to be acquired. The identification terminal monitors the user's deletion operations on the identification frames of the scanned image and, when a deletion operation is received, records each identification frame to be deleted as a first identification frame.
Step S60: obtaining a second identification frame manually added by the user to the scanned image;
The user can likewise add identification frames for further areas of the scanned image as needed, so as to acquire more of the required feature information. The identification terminal monitors the user's addition operations on the scanned image and, when an addition operation is received, records each identification frame to be added as a second identification frame.
Step S70: removing the first identification frames from, and adding the second identification frames to, the acquired identification frames of the scanned image to obtain the final identification frames of the scanned image.
After obtaining the identification frames of the scanned image, the identification terminal removes the first identification frames that the user chose to remove and adds the second identification frames that the user chose to add, yielding the final set of identification frames of the scanned image from which the feature information is to be identified.
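Steps S50 to S70 reduce to simple set edits on the frame list. A minimal sketch, in which the tuple representation of frames and the de-duplication of added frames are assumptions:

```python
def apply_user_edits(frames, removed, added):
    """Steps S50-S70: drop the first identification frames the user deleted,
    then append the second identification frames the user added, yielding the
    final frames from which feature information will be identified."""
    removed_set = set(removed)
    final = [f for f in frames if f not in removed_set]   # step S50/S70: remove
    final.extend(f for f in added if f not in final)      # step S60/S70: add
    return final
```

Starting from two automatically generated frames, deleting one and adding a new one leaves exactly the frames the user wants recognized.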
In this embodiment, identification frames can be manually added or removed according to the user's needs: the frames corresponding to key identification areas whose feature information does not need to be identified are removed, and frames are added for areas whose feature information does need to be identified. This avoids identifying unnecessary feature information while ensuring that the required feature information is not missed, making identification more personalized and meeting the different needs of users.
In addition, an embodiment of the present invention further provides a computer-readable storage medium on which a multiple-identification-area identification program is stored; when the program is executed by a processor, the steps of the method for identifying multiple identification areas described above are implemented.
For specific embodiments of the computer-readable storage medium of the present invention, reference may be made to the embodiments of the method for identifying multiple identification areas described above, which are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or system that comprises that element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform; they may of course also be implemented in hardware, but in many cases the former is preferable. Based on this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the method according to the embodiments of the present invention.
The foregoing description covers only preferred embodiments of the present invention and does not limit the scope of the invention; any equivalent structure or equivalent process transformation made using the contents of this description and the accompanying drawings, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the invention.

Claims (6)

1. A method for identifying a plurality of identification areas, characterized in that the method comprises:
acquiring a scanning image from an acquisition end;
determining each key identification area of the scanned image, and acquiring each identification frame corresponding to each key identification area;
identifying each target feature in each identification frame of the scanned image, analyzing to obtain each target feature information of each key identification area, and sending the target feature information to an information acquisition end;
the step of determining each key identification area of the scanned image and acquiring each identification frame corresponding to each key identification area further comprises the steps of:
performing image scene analysis on the acquired scanned image, and determining the identification scene type to which the image scene of the scanned image belongs;
determining each key identification area of the scanned image according to the identification scene type, and acquiring each identification frame corresponding to each key identification area;
The step of determining each key identification area of the scanned image according to the identification scene type and acquiring each identification frame corresponding to each key identification area comprises the following steps:
determining each preset identification area according to the identification scene type of the image scene of the scanned image;
according to each preset identification area, correspondingly determining the specific position of each key identification area of the scanned image;
generating each identification frame of the scanned image at the periphery of each key identification area according to the specific position of each key identification area;
the step of determining each key identification area of the scanned image according to the identification scene type and acquiring each identification frame corresponding to each key identification area comprises the following steps:
reading each identification area recorded and stored for a plurality of times according to the identification scene type of the image scene of the scanned image;
according to each identification area which is stored and recorded for a plurality of times, correspondingly determining the specific position of each key identification area of the scanned image;
generating each identification frame of the scanned image at the periphery of each key identification area according to the specific position of each key identification area;
The step of determining each key identification area of the scanned image and acquiring each identification frame corresponding to each key identification area further comprises the following steps:
acquiring a first identification frame which is manually deleted by a user for the scanned image;
acquiring a second identification frame which is manually added to the scanned image by a user;
and removing the first identification frame and adding the second identification frame in the acquired identification frames of the scanned image to obtain the final identification frames of the scanned image.
2. The method of claim 1, wherein the step of performing image scene analysis on the acquired scanned image to determine the type of the identified scene to which the image scene of the scanned image belongs comprises:
performing image scene analysis on the acquired scanning image to determine scene-related objects in the scanning image;
comparing the scene associated object with standard objects in each preset identification scene to obtain a first comparison result;
acquiring a target identification scene corresponding to the scanning image from a preset identification scene according to a first comparison result;
and determining the identification scene type to which the image scene of the scanned image belongs according to the target identification scene.
3. The method for identifying a plurality of identification areas according to claim 1, wherein the step of identifying each target feature in each identification frame of the scanned image, analyzing each target feature information of each key identification area, and transmitting the target feature information to the information acquisition terminal comprises:
identifying each target feature within each identification frame of the scanned image;
according to the identified target features, respectively analyzing target feature information contained in the target features;
obtaining the target characteristic information of each key identification area of the scanned image;
and acquiring the characteristic information of each target of the scanned image, and sending the characteristic information to an information acquisition end.
4. The method of claim 3, wherein the step of analyzing the respective target feature information included in the respective target features based on the respective identified target features further comprises:
acquiring each target feature which cannot be analyzed to obtain the target feature information, and acquiring each key identification area corresponding to the feature information which cannot be obtained;
acquiring each key identification area on the scanned image, wherein the key identification areas cannot acquire characteristic information; amplifying each key identification area in which characteristic information cannot be acquired;
Amplifying corresponding to each identification frame according to the amplification condition of each key identification area incapable of acquiring the characteristic information;
and identifying each target feature in each amplified identification frame again, and analyzing each target feature information contained in each target feature.
5. An identification terminal, characterized in that the identification terminal comprises: a memory, a processor, and a multiple-identification-area identification program stored on the memory and executable on the processor, which, when executed by the processor, implements the steps of the method for identifying a plurality of identification areas according to any one of claims 1 to 4.
6. A computer-readable storage medium, characterized in that a multiple-identification-area identification program is stored thereon, which, when executed by a processor, implements the steps of the method for identifying a plurality of identification areas according to any one of claims 1 to 4.
CN202011419732.6A 2018-09-20 2018-09-20 Method for identifying multiple identification areas, identification terminal, and readable storage medium Active CN112560840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011419732.6A CN112560840B (en) 2018-09-20 2018-09-20 Method for identifying multiple identification areas, identification terminal, and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811099817.3A CN109271982B (en) 2018-09-20 2018-09-20 Method for identifying multiple identification areas, identification terminal and readable storage medium
CN202011419732.6A CN112560840B (en) 2018-09-20 2018-09-20 Method for identifying multiple identification areas, identification terminal, and readable storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811099817.3A Division CN109271982B (en) 2018-09-20 2018-09-20 Method for identifying multiple identification areas, identification terminal and readable storage medium

Publications (2)

Publication Number Publication Date
CN112560840A CN112560840A (en) 2021-03-26
CN112560840B true CN112560840B (en) 2023-05-12

Family

ID=65197736

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201811099817.3A Active CN109271982B (en) 2018-09-20 2018-09-20 Method for identifying multiple identification areas, identification terminal and readable storage medium
CN202011419732.6A Active CN112560840B (en) 2018-09-20 2018-09-20 Method for identifying multiple identification areas, identification terminal, and readable storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201811099817.3A Active CN109271982B (en) 2018-09-20 2018-09-20 Method for identifying multiple identification areas, identification terminal and readable storage medium

Country Status (1)

Country Link
CN (2) CN109271982B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580625A (en) * 2020-12-11 2021-03-30 Hisense Visual Technology Co Ltd Display device and image content identification method
WO2022012299A1 (en) 2020-07-14 2022-01-20 Hisense Visual Technology Co Ltd Display device and person recognition and presentation method
CN112860060B (en) * 2021-01-08 2022-07-01 Guangzhou Lango Electronics Technology Co Ltd Image recognition method, device and storage medium
CN113393468A (en) * 2021-06-28 2021-09-14 Beijing Baidu Netcom Science and Technology Co Ltd Image processing method, model training method and device, and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679189A (en) * 2012-09-14 2014-03-26 Huawei Technologies Co Ltd Method and device for recognizing scene
CN105046196A (en) * 2015-06-11 2015-11-11 Xidian University Front vehicle information structured output method based on concatenated convolutional neural networks
US9286541B1 (en) * 2014-09-12 2016-03-15 Amazon Technologies, Inc. Fast multipass underline removal for camera captured OCR
CN108235816A (en) * 2018-01-10 2018-06-29 Cloudminds Shenzhen Holdings Co Ltd Image recognition method, system, electronic device and computer program product

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202163114U (en) * 2011-08-08 2012-03-14 Tianjin Weixu Technology Co Ltd Multifunctional bar code scanning and printing integrated machine
CN104916034A (en) * 2015-06-09 2015-09-16 Pansoft Co Ltd Bill recognition system and recognition method based on intervenable template
CN104916035A (en) * 2015-06-09 2015-09-16 Pansoft Co Ltd Bill recognition system and recognition method based on painting technology
WO2017087568A1 (en) * 2015-11-17 2017-05-26 Eman Bayani A digital image capturing device system and method
CN107516095A (en) * 2016-06-16 2017-12-26 Alibaba Group Holding Ltd Image recognition method and device
KR102564267B1 (en) * 2016-12-01 2023-08-07 Samsung Electronics Co Ltd Electronic apparatus and operating method thereof
CN107358226A (en) * 2017-06-23 2017-11-17 Lenovo (Beijing) Co Ltd Electronic device and recognition method for an electronic device
CN107491709A (en) * 2017-08-29 2017-12-19 Nubia Technology Co Ltd Code pattern recognition method, terminal, and computer-readable storage medium
CN108229463A (en) * 2018-02-07 2018-06-29 ZhongAn Information Technology Service Co Ltd Image-based character recognition method
CN108446693B (en) * 2018-03-08 2020-05-12 Shanghai Clobotics Intelligent Technology Co Ltd Marking method, system, device, and storage medium for a target to be identified

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Vehicle Object Recognition Methods Based on Multiple Image Features; Chen Yi; China Master's Theses Full-text Database (Information Science and Technology); 2015-01-15; 67-82 *

Also Published As

Publication number Publication date
CN109271982B (en) 2020-11-10
CN112560840A (en) 2021-03-26
CN109271982A (en) 2019-01-25

Similar Documents

Publication Publication Date Title
CN112560840B (en) Method for identifying multiple identification areas, identification terminal, and readable storage medium
KR101880004B1 (en) Method and apparatus for identifying television channel information
US8958647B2 (en) Registration determination device, control method and control program therefor, and electronic apparatus
CN110807314A (en) Text emotion analysis model training method, device and equipment and readable storage medium
CN107943811B (en) Content publishing method and device
CN110728687B (en) File image segmentation method and device, computer equipment and storage medium
JP2014131277A (en) Document image compression method and application of the same to document authentication
CN107871001B (en) Audio playing method and device, storage medium and electronic equipment
US11189183B2 (en) Intelligent voice interaction method, device and computer readable storage medium
CN109194689B (en) Abnormal behavior recognition method, device, server and storage medium
WO2019041442A1 (en) Method and system for structural extraction of figure data, electronic device, and computer readable storage medium
CN112784220B (en) Paper contract tamper-proof verification method and system
CN111582134A (en) Certificate edge detection method, device, equipment and medium
CN111046632A (en) Data extraction and conversion method, system, storage medium and electronic equipment
US8218876B2 (en) Information processing apparatus and control method
CN108108646B (en) Bar code information identification method, terminal and computer readable storage medium
CN113438526A (en) Screen content sharing method, screen content display device, screen content equipment and storage medium
CN110992930A (en) Voiceprint feature extraction method and device, terminal and readable storage medium
CN111667602A (en) Image sharing method and system for automobile data recorder
CN107784328B (en) German old font identification method and device and computer readable storage medium
CN110706221A (en) Verification method, verification device, storage medium and device for customizing pictures
CN113507571B (en) Video anti-clipping method, device, equipment and readable storage medium
CN110929725B (en) Certificate classification method, device and computer readable storage medium
CN114303352B (en) Push content processing method and device, electronic equipment and storage medium
CN108363937B (en) Two-dimensional code scanning method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant