CN108875731B - Target identification method, device, system and storage medium - Google Patents
- Publication number
- CN108875731B CN108875731B CN201711457414.7A CN201711457414A CN108875731B CN 108875731 B CN108875731 B CN 108875731B CN 201711457414 A CN201711457414 A CN 201711457414A CN 108875731 B CN108875731 B CN 108875731B
- Authority
- CN
- China
- Prior art keywords
- image
- region
- identified
- target
- interest
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present invention provide a target identification method, device, system and storage medium. The method comprises: acquiring an image to be identified; detecting the quality of the image to be identified to output a quality evaluation map, wherein the value of each pixel in the quality evaluation map represents the identification accuracy of the content corresponding to that pixel; identifying a region of interest in the image to be identified, wherein the region of interest is a sub-region of the region occupied by the target in the image to be identified; and determining the identification accuracy of the region of interest according to the quality evaluation map. This technical scheme lets a user understand more clearly which target misidentifications may be caused by quality problems in the image to be identified, so that the reliability and adaptability of the target identification system are greatly improved and the user experience is significantly improved.
Description
Technical Field
The present invention relates to the field of pattern recognition technologies, and in particular, to a method, an apparatus, a system, and a storage medium for target recognition.
Background
With the continuous development of image processing technology and the ever-increasing computing power of computing devices, more and more application scenarios require identifying targets in images.
For example, in industries such as finance, social networking, and telecommunications, identity authentication is an important module that undertakes the task of verifying a user's identity, so as to meet real-name requirements and the like. The resident identity card serves as a valid certificate of personal identity, and the identity card in an image can generally be recognized to obtain information for identity verification. In some existing services, identity card recognition is performed manually. However, manual recognition and entry is time-consuming and error-prone. In other services, systems have appeared that automatically recognize the identity card in an image. These systems acquire an image to be identified that includes the identity card through an image acquisition device, such as a camera on a smartphone, tablet computer, or dedicated device, and automatically locate and recognize identity card information such as the name, identity card number, and validity period.
However, the conventional target recognition systems described above fall short in both recognition accuracy and adaptability. In particular, for low-quality images to be identified, such as images in which a key area of the identity card reflects strong glare or in which the validity period of the card is occluded, the recognition results of existing systems are unreliable, which makes further processing of those results very difficult.
Disclosure of Invention
The present invention has been made in view of the above problems, and provides a target identification method, device, system and storage medium.
According to an aspect of the present invention, there is provided a target recognition method, including:
acquiring an image to be identified;
detecting the quality of the image to be identified to output a quality evaluation map, wherein the value of each pixel in the quality evaluation map represents the identification accuracy of the content corresponding to that pixel;
identifying a region of interest in the image to be identified, wherein the region of interest is a sub-region of a region of an object in the image to be identified; and
determining the identification accuracy of the region of interest according to the quality evaluation map.
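Taken together, the four steps above can be sketched as a minimal pipeline. This is an illustrative sketch only: the function bodies are placeholder stand-ins for trained networks, and all names are hypothetical.

```python
import numpy as np

def acquire_image() -> np.ndarray:
    # Step 1 (placeholder): in practice the image to be identified comes
    # from a camera, possibly after preprocessing; here, synthetic data.
    img = np.full((384, 540), 120, dtype=np.uint8)
    img[50:60, 100:140] = 250  # simulated glare inside the target
    return img

def detect_quality(img: np.ndarray) -> np.ndarray:
    # Step 2 (placeholder for the quality-detection network): flag
    # near-white, glare-like pixels as low quality (value 255).
    return np.where(img > 240, 255, 0).astype(np.uint8)

def locate_roi(img: np.ndarray) -> tuple:
    # Step 3 (placeholder): a fixed region of interest as (x, y, w, h).
    return (90, 45, 80, 30)

def roi_accuracy(qmap: np.ndarray, roi: tuple) -> float:
    # Step 4: accuracy of the region of interest from the quality map;
    # a larger map value means lower accuracy.
    x, y, w, h = roi
    return 1.0 - qmap[y:y + h, x:x + w].max() / 255.0

img = acquire_image()
accuracy = roi_accuracy(detect_quality(img), locate_roi(img))
# The simulated glare overlaps the region of interest, so the predicted
# identification accuracy is the worst possible value, 0.0.
```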
Illustratively, acquiring the image to be identified includes:
acquiring an original image;
performing target recognition on the original image to determine a target area of the target in the original image; and
acquiring the image to be identified based on the image of the target area.
Illustratively, acquiring the image to be identified based on the image of the target area includes:
determining a transformation matrix according to the positioning information of the target area; and
transforming the image of the target area into the image to be identified, having a standard shape and a first standard size, based on the transformation matrix.
Illustratively, performing target recognition on the original image to determine a target region of the target in the original image includes: performing target recognition on the original image by using a second convolutional neural network (CNN) to determine the target region of the target in the original image.
Illustratively, the second CNN is trained using a set of second training images in which the positions of targets are marked.
Illustratively, prior to performing target recognition on the original image, the method further comprises: scaling the original image to a second standard size.
Illustratively, detecting the quality of the image to be identified to output a quality evaluation map includes: detecting the quality of the image to be identified by using a first CNN to output the quality evaluation map.
Illustratively, the first CNN is trained using a set of first training images on which distortion regions are marked, where a distortion region is a region in which the identification accuracy of the content corresponding to its pixels is lower than a preset threshold.
Illustratively, determining the identification accuracy of the region of interest according to the quality evaluation map includes:
determining the corresponding region of the region of interest in the quality evaluation map according to the positional correspondence between the quality evaluation map and the image to be identified; and
determining the identification accuracy of the region of interest according to the pixel values in the corresponding region.
Illustratively, the target is an identity card and the region of interest is a text region.
According to another aspect of the present invention, there is also provided a target identification apparatus, comprising:
the acquisition module is used for acquiring an image to be identified;
the detection module is used for detecting the quality of the image to be identified to output a quality evaluation map, wherein the value of each pixel in the quality evaluation map represents the identification accuracy of the content corresponding to that pixel;
the identification module is used for identifying a region of interest in the image to be identified, wherein the region of interest is a sub-region of the region of the target in the image to be identified; and
the determining module is used for determining the identification accuracy of the region of interest according to the quality evaluation map.
According to yet another aspect of the present invention, there is also provided a target identification system comprising a processor and a memory, wherein the memory stores computer program instructions which, when executed by the processor, perform the above target identification method.
According to yet another aspect of the present invention, there is also provided a storage medium having stored thereon program instructions which, when executed, perform the above target identification method.
According to the target identification method, device, system and storage medium of the embodiments of the present invention, target identification is assisted by quality detection of the image to be identified, which can improve the identification accuracy of the target's region of interest. The user can more clearly understand which target misidentifications may be caused by quality problems in the image to be identified, so that the reliability and adaptability of the target identification system are greatly improved and the user experience is significantly improved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 illustrates a schematic block diagram of an example electronic device for implementing a target recognition method and apparatus in accordance with embodiments of the present invention;
FIG. 2 shows a schematic flow diagram of a target recognition method according to one embodiment of the invention;
FIG. 3 shows a schematic flow diagram for acquiring an image to be identified according to one embodiment of the invention;
FIG. 4 shows a schematic flow diagram for acquiring an image to be identified based on an image of a target area, according to one embodiment of the invention;
FIG. 5 shows a schematic block diagram of an object recognition apparatus according to one embodiment of the present invention; and
FIG. 6 shows a schematic block diagram of an object recognition system according to one embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It should be understood that the described embodiments are only some of the embodiments of the present invention, and not all of the embodiments of the present invention, and it should be understood that the present invention is not limited by the exemplary embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention described in the present application without inventive step, shall fall within the scope of protection of the present invention.
First, an example electronic device 100 for implementing an object recognition method and apparatus according to an embodiment of the present invention is described with reference to fig. 1.
As shown in fig. 1, electronic device 100 includes one or more processors 102, one or more memory devices 104. Optionally, the electronic device 100 may also include an input device 106, an output device 108, and an image capture device 110, which may be interconnected via a bus system 112 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 102 may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement client-side functionality (implemented by the processor) and/or other desired functionality in the embodiments of the invention described below. Various applications and data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images and/or sounds) to an external (e.g., user), and may include one or more of a display, a speaker, etc.
The image capture device 110 may capture images and store the captured images in the storage device 104 for use by other components. The image capture device 110 may be a camera. It should be understood that the image capture device 110 is merely an example, and the electronic device 100 may not include the image capture device 110. In this case, the image to be recognized may be captured by other image capturing devices and the captured image may be transmitted to the electronic apparatus 100.
Exemplary electronic devices for implementing the object recognition methods and apparatus according to embodiments of the present invention may be implemented on devices such as personal computers or remote servers, for example.
The target identification method, device, system and storage medium according to the embodiments of the present invention can be applied to various fields of target identification. The target may be, for example, a bank card, an identity card, a work badge, a passport, a household register page, a business license, a pedestrian, or any other object to be identified. The following describes, by way of example, the target identification method, device, system, and storage medium provided by the present invention, which should not be construed as limiting the present invention.
Next, an object recognition method according to an embodiment of the present invention will be described with reference to fig. 2. FIG. 2 shows a schematic flow diagram of a target recognition method 200 according to one embodiment of the present invention. As shown in fig. 2, the method 200 includes the following steps.
Step S210, acquiring an image to be identified.
The image to be recognized may be any suitable image that requires object recognition. As previously mentioned, the target may be any object that the user desires to identify, such as a bank card, identification card, or the like. The image to be recognized may be an original image acquired by an image acquisition device such as a camera, or may be an image obtained after preprocessing the original image. The preprocessing operation may include all operations for more clearly identifying the target. For example, the preprocessing operation may include a denoising operation such as filtering.
The image to be recognized may be captured by an image capture device 110 (e.g., a camera) included in the electronic device 100 and transmitted to the processor 102 for processing. The image to be recognized may also be captured by a client device (such as an image capture device including a camera) and ultimately transmitted to the electronic device 100 for processing by the processor 102 of the electronic device 100.
Furthermore, the raw image may be captured by an image capture device 110 (e.g., a camera) included in the electronic device 100 or by a client device (such as an image capture device including a camera) and transmitted to the processor 102 for pre-processing to obtain the image to be recognized, and then still for subsequent processing by the processor 102.
Step S220, detecting the quality of the image to be identified acquired in step S210 to output a quality evaluation map. The value of each pixel in the quality evaluation map represents the identification accuracy of the content corresponding to that pixel.
In the embodiments of the present application, the quality of the image to be identified is related to the identification accuracy of its content: the more easily and accurately the content in the image can be identified, the higher the image quality, and vice versa. For example, if the target in the image is intentionally or unintentionally occluded, identification accuracy will certainly suffer. Specifically, if the holder of an identity card deliberately covers the identity card number, identification of the card is seriously affected, and the quality of the image to be identified can therefore be said to be low. As another example, strong light falling on the area where the target is located can cause glare, producing a large number of near-white pixels in that area which no longer reflect the original appearance of the photographed object. This inevitably makes the content in that area difficult to identify accurately, so the quality of such an image is also said to be low.
The detection result is the quality evaluation map, in which the value of each pixel indicates the identification accuracy of the content corresponding to that pixel; the accuracy is estimated based on the quality of the image to be identified. Optionally, the quality evaluation map may be a grayscale image in which, for example, a larger pixel value indicates lower identification accuracy of the corresponding content and a smaller pixel value indicates higher accuracy. The pixels/areas in the quality evaluation map correspond one-to-one to the pixels/areas in the image to be identified. Illustratively, the quality evaluation map is the same size as the image to be identified.
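One concrete reading of this grayscale convention is a linear mapping from 8-bit pixel values onto an accuracy score in [0, 1]. The linear form is an illustrative assumption, not something specified above:

```python
import numpy as np

def pixel_accuracy(quality_map: np.ndarray) -> np.ndarray:
    # Grayscale quality evaluation map: a larger pixel value means lower
    # identification accuracy. Map each 8-bit value to a score in [0, 1].
    return 1.0 - quality_map.astype(float) / 255.0

qmap = np.array([[0, 128, 255]], dtype=np.uint8)
scores = pixel_accuracy(qmap)  # 0 -> fully reliable, 255 -> unreliable
```

Because the map and the image correspond pixel-for-pixel, `scores` can be indexed with the same coordinates as the image to be identified.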
Optionally, step S220 may detect the quality of the image to be identified using a neural network such as a CNN. In particular, the CNN may be a fully convolutional network comprising a plurality of convolutional layers: the image to be identified is input into the CNN, and the CNN outputs the quality evaluation map. Because a CNN can learn autonomously, using one for quality detection can greatly improve the precision and reliability of quality detection, and in turn the accuracy and reliability of target identification.
Step S230, identifying a region of interest in the image to be identified acquired in step S210. It is understood that the target occupies a certain area in the image to be recognized. For example, the target occupies a part or the whole of the area in the image to be recognized. The region of interest is a sub-region of the region occupied by the object in the image to be recognized. In the region of interest, specific content that the user desires to identify is included. For example, for the example where the target is an identification card, the specific content that the user desires to identify may be textual content such as an identification number in the identification card. In this step, the position of the region of interest in the image to be recognized is determined.
In particular, for identification card recognition, the user may be more interested in the text in the identification card, such as the identification card number or the validity period of the identification card, and have no or little interest in the background pattern. The words in the identification card can be said to be the specific contents that the user desires to identify in the identification card identification. Therefore, the region of interest may be a text region in the image to be recognized, such as a region where an identification number or an expiration date of an identification card is located. For another example, for pedestrian recognition, the user may be more interested in the face of a pedestrian. In this case, the region of interest is the region in the image to be recognized where the face is located.
Step S230 may determine the position of the region of interest by using an absolute coordinate method. In one example, various recognition models can be employed for implementation. The recognition model may be trained using a large number of training images. The region of interest of the target is labeled in the training image. In another example, the location of the region of interest may be determined based on the locations of other features in the target and the relative positional relationship of the region of interest to the features.
Step S240, determining the identification accuracy of the region of interest identified in step S230 according to the quality evaluation map obtained in step S220. The identification accuracy of the region of interest represents the identification accuracy of the target more precisely, and it can be predicted from the pixel values of the region of interest's corresponding region in the quality evaluation map.
According to one embodiment of the invention, first, the corresponding region of the region of interest in the quality evaluation map is determined according to the positional correspondence between the quality evaluation map and the image to be identified. Then, the identification accuracy of the region of interest is determined from the pixel values in that corresponding region. In one example, where a larger pixel value in the quality evaluation map indicates lower identification accuracy, the identification accuracy of the region of interest can be determined from the maximum pixel value in the corresponding region. In another example, it may be determined from the average pixel value in the corresponding region.
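The two aggregation choices (maximum versus average pixel value) can be sketched as follows; the 0–255 scale and the helper name are illustrative assumptions:

```python
import numpy as np

def region_accuracy(quality_map: np.ndarray, roi: tuple,
                    mode: str = "max") -> float:
    # The quality map and the image to be identified correspond
    # pixel-for-pixel, so the ROI's (x, y, w, h) indexes both identically.
    x, y, w, h = roi
    region = quality_map[y:y + h, x:x + w].astype(float)
    worst = region.max() if mode == "max" else region.mean()
    return 1.0 - worst / 255.0  # larger map values mean lower accuracy

qmap = np.zeros((384, 540), dtype=np.uint8)
qmap[100:110, 200:210] = 255  # a small low-quality (e.g. glare) patch
roi = (195, 95, 20, 20)       # region of interest containing the patch
conservative = region_accuracy(qmap, roi, mode="max")
averaged = region_accuracy(qmap, roi, mode="mean")
```

The maximum is the conservative choice: a single badly distorted pixel inside the region of interest drives the predicted accuracy to zero, whereas the mean discounts small defects.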
Step S240 effectively avoids the error of predicting target identification accuracy from overall image quality. For example, in an identity card image, occlusion or strong glare may occur in an image area outside the identity card, or in a background area of the card without text content, yet this has no influence on the accuracy of identity card recognition; judging by the overall quality of the image would therefore yield an inaccurate result. In other words, because step S240 considers only the image quality of the portion where the region of interest is located, it can provide a more accurate measure of target identification accuracy.
After the above target identification process is completed, the specific content in the region of interest may be further identified based on the obtained result. For example, for identification card identification, the identification card number of the identification card can be further identified. This process is understood by those of ordinary skill in the art and will not be described herein for the sake of brevity.
In the above target identification method, the identification accuracy of the target's region of interest is provided for the user's reference. This effectively avoids the error of prejudging identification accuracy from the quality of the entire image. The user can more clearly understand which target misidentifications may be caused by local or overall quality problems of the image to be identified, so that the reliability and adaptability of the target identification system are greatly improved and the user experience is significantly improved.
It will be appreciated by those skilled in the art that the above object recognition method is merely exemplary and that various changes can be made to achieve the above technical effects. For example, step S230 may be performed prior to step S220.
Illustratively, the object recognition method according to embodiments of the present invention may be implemented in a device, apparatus, or system having a memory and a processor. The target identification method according to the embodiment of the invention can be deployed at an image acquisition end. For example, it may be deployed in an image capturing end in a banking system to identify a customer's identification card in real time. Alternatively, the target identification method according to the embodiment of the present invention may also be distributively deployed at the server side (or cloud side) and the client side. For example, an image may be collected at a client, and the client transmits the collected image to a server (or a cloud), so that the server (or the cloud) performs target recognition.
In one embodiment, the image to be recognized in the method 200 shown in FIG. 2 is an image obtained after processing an original image. FIG. 3 shows a schematic flow diagram for acquiring an image to be recognized according to one embodiment of the present invention. As shown in fig. 3, the aforementioned step S210 may include the following steps.
In step S211, an original image is acquired.
The original image may be an original image captured by an image capturing device such as a camera. For example, the raw image may be captured by an image capture device 110 (e.g., a camera) included in the electronic device 100 and transmitted to the processor 102 for processing. The raw image may also be captured by a client device (such as an image capture device including a camera) and ultimately sent to the electronic device 100 for processing by the processor 102 of the electronic device 100.
Optionally, the original image may also be a scaled image. For example, the size of the original image is scaled to a standard size. The standard size is, for example, 256 pixels wide and 160 pixels high.
In step S212, the original image acquired in step S211 (which may be unscaled or scaled) is subjected to target recognition to determine the target region of the target in the original image, i.e., the position of the target in the original image.
Taking identity card recognition as an example, step S212 determines the position of the identity card in the original image, i.e., the target area. In one example, the location of the identity card in the original image may be defined by its center coordinates, its width, and its height. It will be appreciated that in this example the identity card needs to be placed relatively upright in the original image. In an application scenario such as a banking system, a shallow slot for holding the identity card may be provided at the image acquisition end so that the customer places the card in a specific position; the identity card in the acquired original image is then relatively upright, which facilitates subsequent target recognition and improves its speed and accuracy. In another example, the location of the identity card in the original image may be defined by four points corresponding to the four vertices where the card's edges intersect. In this example there is no restriction on the position of the identity card in the image, so the area where the card is located may be any quadrilateral, such as a rectangle, a trapezoid, or an irregular quadrilateral. This approach can therefore handle a wide variety of original images, markedly improving the adaptability and robustness of the target recognition system.
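The two localization representations above are interchangeable when the card is axis-aligned; a small (hypothetical) helper converting a (center, width, height) box into its four corner points:

```python
def center_to_corners(cx: float, cy: float, w: float, h: float) -> list:
    # Corners ordered top-left, top-right, bottom-right, bottom-left,
    # assuming an axis-aligned (relatively upright) target box.
    return [
        (cx - w / 2, cy - h / 2),
        (cx + w / 2, cy - h / 2),
        (cx + w / 2, cy + h / 2),
        (cx - w / 2, cy + h / 2),
    ]

corners = center_to_corners(128.0, 80.0, 100.0, 60.0)
```

The reverse direction is lossy in general: four arbitrary quadrilateral vertices cannot be reduced to a center/width/height box without discarding the perspective distortion, which is why the four-point representation is the more flexible of the two.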
Optionally, step S212 may perform target recognition using a neural network such as a CNN to determine the target region of the target in the original image. This CNN may include a plurality of convolutional layers with a fully connected layer as the last layer. The original image is input into the CNN, which outputs the above-mentioned information about the target region, for example by regression. In the identity card example above, the CNN may output the center coordinates, width, and height of the identity card in the original image, or the coordinates of its four vertices, etc. Using a CNN to identify the target in the original image can greatly improve the precision and reliability of target localization, and in turn the accuracy and reliability of target identification.
For the case where the CNN performs target recognition on the original image, scaling the original image to a standard size makes it more convenient for the CNN to process and further improves the speed and accuracy of target recognition.
Step S213, acquiring the aforementioned image to be recognized based on the image in the target region determined in step S212.
In one example, an image in the target region may be extracted from the original image, and the extracted image may be taken as the image to be recognized.
In another example, an image in the target region may be extracted from the original image and subjected to correction processing, and the corrected image may then be taken as the image to be recognized. The image to be recognized obtained through correction has a standard shape and a standard size. Because the positions of targets in different images, and the shapes of the regions where they are located, may differ, correcting the extracted image improves the accuracy of recognizing the region of interest in the image to be recognized. Alternatively, the extracted image may be corrected according to the positioning information of the target area. Fig. 4 shows a schematic flow diagram for acquiring an image to be recognized based on an image of a target area according to another example of the present invention. As shown in Fig. 4, a transformation matrix is first determined from the positioning information of the target area. Then, based on the transformation matrix, the image of the target area is transformed into an image to be recognized having a standard shape and a standard size. As described in step S212, the target area may be an arbitrary quadrilateral. The transformation matrix transforms the image of the target area into an image of a standard shape, such as a rectangle of a particular size; specifically, the transformed image to be recognized may be a rectangular image 540 pixels wide and 384 pixels high. It will be appreciated that through this transformation, regions within the target area, such as the region of interest, are transformed accordingly. The transformation can be implemented as a projective transformation.
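The transformation-matrix step above can be sketched as a projective (homography) fit: solve for the 3×3 matrix that maps the four detected corner points onto the standard 540×384 rectangle. This is a minimal pure-Python illustration, not the patent's implementation; the corner coordinates in `quad` are invented for the example, and a real system would typically call a library routine such as OpenCV's `getPerspectiveTransform` / `warpPerspective`.

```python
# Solve for the homography H (with h33 = 1) from four point correspondences,
# using a small Gaussian-elimination solver for the resulting 8x8 system.

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def homography(src, dst):
    """Compute H mapping each src point (x, y) to the matching dst point (u, v)."""
    a, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(a, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(h, pt):
    """Apply the projective transformation to one point."""
    x, y = pt
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)

quad = [(102, 80), (610, 95), (598, 470), (90, 440)]   # detected corners (illustrative)
rect = [(0, 0), (539, 0), (539, 383), (0, 383)]        # 540x384 standard shape
H = homography(quad, rect)
```

Warping the image itself then amounts to sampling the source image at `apply_h(H_inverse, (u, v))` for every destination pixel.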
Unifying target-area images of arbitrary shape into the standard shape and standard size through the above transformation facilitates subsequent identification of the region of interest and the determination of its identification accuracy from the quality evaluation graph, which improves the processing speed and accuracy of the subsequent steps and thus the speed and accuracy of the whole target recognition process. For example, the positions of the regions of interest in an identity card, such as the region where the identity card number is located, are fixed; once identity card images are unified into a standard shape and size, the region of interest in the image to be identified can be located directly from the actual position of that region on the physical card. Specifically, the size of an identity card is 85.6 mm × 54 mm, and the position of the identity number on the card is fixed, so the region of interest in the image to be identified can be identified from this actual position. This guarantees recognition accuracy while simplifying the recognition computation.
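The mapping from the fixed physical position on the card to the normalized image can be sketched as a simple scaling. The millimetre values used for the number field below are illustrative placeholders, not the card's real layout.

```python
# Locate a region of interest in the normalized 540x384 image directly from
# its known physical position on the 85.6 mm x 54 mm card.

CARD_W_MM, CARD_H_MM = 85.6, 54.0
IMG_W_PX, IMG_H_PX = 540, 384

def mm_rect_to_px(x_mm, y_mm, w_mm, h_mm):
    """Scale a rectangle given in millimetres on the card into pixel
    coordinates of the normalized image."""
    sx, sy = IMG_W_PX / CARD_W_MM, IMG_H_PX / CARD_H_MM
    return (round(x_mm * sx), round(y_mm * sy),
            round(w_mm * sx), round(h_mm * sy))

# e.g. a hypothetical number field 30 mm from the left, 44 mm from the top
roi_px = mm_rect_to_px(30.0, 44.0, 50.0, 6.0)
```

Because every normalized card image has the same size, this lookup replaces a per-image region-of-interest detection, which is what simplifies the recognition computation.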
The content of the image to be recognized determined according to the flowchart shown in fig. 3 may only include the target itself, so that interference of a useless background is avoided in subsequent target recognition, the accuracy of target recognition is remarkably improved, the data to be processed is reduced, and the target recognition speed is improved.
According to an embodiment of the present invention, the CNN for performing target recognition on an original image and/or the CNN for detecting the quality of an image to be recognized to output a quality evaluation graph are obtained by training with a set of training images. For convenience of description, the CNN for performing target recognition on an original image is referred to as the recognition CNN, and the CNN for detecting the quality of an image to be recognized to output a quality evaluation graph is referred to as the detection CNN.
The training images for training the recognition CNN are marked with the positions of the targets. For example, in the identity card case, each training image is marked with the location of the identity card in it, e.g., the four edges of the identity card or the intersection points of those edges. Optionally, the training image is scaled to a standard size, for example 256 pixels wide and 160 pixels high.
The training images used for training the detection CNN are labeled with distortion regions. A distortion region is a region in which the recognition accuracy of the corresponding content of the pixels in the training image is lower than a preset threshold; the recognition accuracy referred to here may be estimated empirically by a skilled person. In general, a distortion region is a region where image quality is low, such as an overexposed region, a region of low definition, or an occluded region. Because the image quality of these regions is low, recognition of the target is adversely affected, and the recognition accuracy of the corresponding content is correspondingly low.
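One simple way to represent such a distortion-region label is a binary mask obtained by thresholding a per-pixel accuracy estimate, as sketched below. The threshold value and the example accuracy values are assumptions for illustration; the patent only requires that the region be below a preset threshold.

```python
# Distortion-region label as a binary mask: 1 where the estimated
# per-pixel recognition accuracy falls below the preset threshold.

THRESHOLD = 0.5  # illustrative preset threshold

def distortion_mask(accuracy_map):
    """accuracy_map: rows of per-pixel accuracy estimates in [0, 1].
    Returns 1 where the pixel is labeled as distorted, 0 elsewhere."""
    return [[1 if a < THRESHOLD else 0 for a in row] for row in accuracy_map]

quality = [[0.9, 0.8, 0.2],   # 0.2: e.g. an overexposed patch
           [0.9, 0.4, 0.3]]   # 0.4, 0.3: e.g. blurred / occluded pixels
mask = distortion_mask(quality)
```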
Alternatively, the training images for training the detection CNN may be acquired in the following manner. First, an original image is acquired. Then, target recognition is performed on the original image to determine the target area of the target in it; this step may be implemented using the recognition CNN. Finally, the training image is acquired based on the image of the target area. In one example, the training image may be obtained by directly extracting the image of the target region and labeling the distortion regions in it. In another example, a transformation matrix is determined from the positioning information of the target area; the image of the target area is transformed, based on the transformation matrix, into an image having a standard shape and a standard size; and distortion regions are labeled on the transformed image to obtain the training image. These steps are similar to those of the process of acquiring the image to be recognized and, for brevity, are not described again here.
The parameters of the CNN are first initialized randomly or from other, previously trained networks; in the latter case, part of a pre-trained network may be used as part of the CNN. The CNN is then trained using the labeled training images. During training, some of the parameters in the CNN may be held fixed so that they do not participate in the training. The parameters of each convolution unit can be optimized by a back-propagation algorithm during the training process.
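The idea of holding part of the parameters fixed during training can be sketched on a toy objective: gradients of the "frozen" parameters are simply not applied, so only the trainable ones move. This is a minimal illustration of the principle, not the patent's training procedure; a deep-learning framework would express the same thing by, e.g., disabling gradient tracking on the frozen layers.

```python
# One gradient-descent update that skips parameters marked as frozen.

def train_step(params, grads, frozen, lr=0.1):
    """Update each parameter by -lr * grad unless it is frozen."""
    return [p if f else p - lr * g
            for p, g, f in zip(params, grads, frozen)]

# Toy objective: loss = (w0 + w1 - 3)^2; both weights share the gradient.
params = [0.0, 0.0]
frozen = [True, False]        # w0 is fixed (e.g. from a pre-trained net)
for _ in range(200):
    g = 2 * (params[0] + params[1] - 3)
    params = train_step(params, [g, g], frozen)
# w0 stays at its initial value; w1 alone converges toward 3.
```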
A CNN obtained through such training performs better, which in turn improves the performance of the target recognition system.
According to another embodiment of the present invention, the CNN for performing target recognition on the original image and/or the CNN for detecting the quality of the image to be recognized to output the quality evaluation graph are pre-stored in the target recognition system.
According to another aspect of the invention, an object recognition apparatus is also provided. Fig. 5 shows a schematic block diagram of an object recognition apparatus 500 according to an embodiment of the present invention.
As shown in fig. 5, the apparatus 500 includes an acquisition module 510, a detection module 520, an identification module 530, and a determination module 540. These modules may respectively perform the corresponding steps/functions of the object recognition method described above. Only the main functions of the components of the apparatus 500 are described below; details already described above are omitted. It is understood that the target may be an identification card, a bank card, a pedestrian, or the like.
The obtaining module 510 is used for obtaining an image to be recognized. The obtaining module 510 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
The detecting module 520 is configured to detect the quality of the image to be identified to output a quality evaluation graph, where a pixel value of a pixel in the quality evaluation graph represents an identification accuracy of a corresponding content of the pixel. The detection module 520 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
The identifying module 530 is configured to identify a region of interest in the image to be identified, wherein the region of interest is a sub-region of a region of an object in the image to be identified. The identification module 530 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
For the case where the target is an identification card, the region of interest here may be a text region.
The determining module 540 is configured to determine the identification accuracy of the region of interest according to the quality evaluation graph. The determination module 540 may also be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
Illustratively, the obtaining module 510 is further configured to:
acquiring an original image;
performing target recognition on the original image to determine a target area of the target in the original image; and
acquiring the image to be identified based on the image of the target area.
Illustratively, the acquiring module 510 acquires the image to be recognized based on the image of the target area by:
determining a transformation matrix according to the positioning information of the target area; and
transforming the image of the target area into the image to be recognized having a standard shape and a first standard size based on the transformation matrix.
For example, the obtaining module 510 performs target recognition on the original image to determine a target area of the target in the original image by: carrying out target identification on the original image by using the second CNN so as to determine the target area of the target in the original image.
Illustratively, the second CNN is trained by using a set of second training images, wherein the second training images are marked with positions of targets.
Illustratively, the obtaining module 510 is further configured to, before the target recognition of the original image: scaling the original image to a second standard size.
Illustratively, the detecting module 520 detects the quality of the image to be recognized to output the quality evaluation graph by: detecting the quality of the image to be identified by using the first CNN so as to output the quality evaluation graph.
Illustratively, the first CNN is obtained by training with a set of first training images, where distortion regions are marked on the first training images, where the distortion regions are regions in which the recognition accuracy of corresponding contents of pixels in the first training images is lower than a preset threshold.
Illustratively, the determining module 540 is specifically configured to:
determining a corresponding region of the region of interest in the quality evaluation graph according to the position corresponding relation between the quality evaluation graph and the image to be identified; and
determining the identification accuracy of the region of interest according to the pixel values in the corresponding region.
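The two-step procedure performed by the determining module 540 can be sketched as follows. The patent does not fix how the pixel values inside the corresponding region are combined; averaging them, as done here, is one natural choice and is an assumption of this sketch, as are the example values.

```python
# Determine the identification accuracy of a region of interest from the
# quality evaluation graph: find the corresponding region (here the graph
# shares the image's coordinates), then aggregate its pixel values.

def roi_accuracy(quality_map, roi):
    """Average the quality-evaluation values inside the region that
    corresponds to the region of interest.
    roi = (x, y, w, h) in the quality map's coordinates."""
    x, y, w, h = roi
    vals = [quality_map[r][c] for r in range(y, y + h)
                              for c in range(x, x + w)]
    return sum(vals) / len(vals)

quality_map = [[1.0, 1.0, 0.2, 0.2],
               [1.0, 1.0, 0.2, 0.2]]
acc = roi_accuracy(quality_map, (2, 0, 2, 2))   # ROI over the low-quality patch
```

If the quality evaluation graph has a different resolution from the image to be identified, the region of interest would first be rescaled by the position correspondence between the two before sampling.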
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
FIG. 6 shows a schematic block diagram of an object recognition system 600 according to one embodiment of the present invention. As shown in fig. 6, the system 600 includes an input device 610, a storage device 620, a processor 630, and an output device 640. It is to be understood that the target may be various objects to be identified, such as an identification card, a bank card, and a pedestrian.
The input device 610 is used for receiving an operation instruction input by a user and collecting data. The input device 610 may include one or more of a keyboard, a mouse, a microphone, a touch screen, an image capture device, and the like.
The storage means 620 stores computer program instructions for implementing the respective steps in the object recognition method according to an embodiment of the present invention.
The processor 630 is configured to run the computer program instructions stored in the storage 620 to perform the corresponding steps of the object recognition method according to the embodiment of the present invention, and is configured to implement the obtaining module 510, the detecting module 520, the recognizing module 530 and the determining module 540 in the object recognition apparatus according to the embodiment of the present invention.
In one embodiment of the invention, the computer program instructions, when executed by the processor 630, cause the system 600 to perform the steps of:
acquiring an image to be identified;
detecting the quality of the image to be identified to output a quality evaluation graph, wherein the pixel value of a pixel in the quality evaluation graph represents the identification accuracy of the corresponding content of the pixel;
identifying a region of interest in the image to be identified, wherein the region of interest is a sub-region of a region of an object in the image to be identified; and
determining the identification accuracy of the region of interest according to the quality evaluation graph.
For the case where the target is an identification card, the region of interest here may be a text region.
Illustratively, the step of acquiring an image to be identified, which when the computer program instructions are executed by the processor 630, causes the system 600 to perform, comprises:
acquiring an original image;
performing target recognition on the original image to determine a target area of the target in the original image; and
acquiring the image to be identified based on the image of the target area.
Illustratively, the step of acquiring the image to be recognized based on the image of the target area, which, when the computer program instructions are executed by the processor 630, causes the system 600 to perform, includes:
determining a transformation matrix according to the positioning information of the target area; and
transforming the image of the target area into the image to be recognized having a standard shape and a first standard size based on the transformation matrix.
Illustratively, the step of object recognition of the original image to determine an object region of the object in the original image, which when the computer program instructions are executed by the processor 630, causes the system 600 to perform comprises: carrying out target identification on the original image by using the second CNN so as to determine a target area of the target in the original image.
Illustratively, the second CNN is obtained by training with a set of second training images, where the second training images are labeled with positions of targets.
Illustratively, the computer program instructions, when executed by the processor 630, further cause the system 600 to perform the following step before the step of performing target recognition on the original image: scaling the original image to a second standard size.
Illustratively, the step of detecting the quality of the image to be recognized to output a quality evaluation graph, which when the computer program instructions are executed by the processor 630, causes the system 600 to perform, comprises: detecting the quality of the image to be identified by using the first CNN to output the quality evaluation graph.
Illustratively, the first CNN is obtained by training with a set of first training images, where distortion regions are marked on the first training images, where the distortion regions are regions in which the recognition accuracy of corresponding contents of pixels in the first training images is lower than a preset threshold.
Illustratively, the step of determining the identification accuracy of the region of interest from the quality assessment map, which is performed by the system 600 when the computer program instructions are executed by the processor 630, comprises:
determining a corresponding region of the region of interest in the quality evaluation graph according to the position corresponding relation between the quality evaluation graph and the image to be identified; and
determining the identification accuracy of the region of interest according to the pixel values in the corresponding region.
Furthermore, according to still another aspect of the present invention, there is also provided a storage medium on which program instructions are stored, which when executed by a computer or a processor cause the computer or the processor to perform the respective steps of the object recognition method according to the embodiment of the present invention and to implement the respective modules in the object recognition apparatus according to the embodiment of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the steps of:
acquiring an image to be identified;
detecting the quality of the image to be identified to output a quality evaluation graph, wherein the pixel value of a pixel in the quality evaluation graph represents the identification accuracy of the corresponding content of the pixel;
identifying a region of interest in the image to be identified, wherein the region of interest is a sub-region of a region of an object in the image to be identified; and
determining the identification accuracy of the region of interest according to the quality evaluation graph.
For the case where the target is an identification card, the region of interest here may be a text region.
In one embodiment of the invention, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the step of acquiring an image to be identified comprising:
acquiring an original image;
performing target recognition on the original image to determine a target area of the target in the original image; and
acquiring the image to be identified based on the image of the target area.
Illustratively, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the step of acquiring the image to be recognized based on the image of the target area, including:
determining a transformation matrix according to the positioning information of the target area;
transforming the image of the target area into the image to be recognized having a standard shape and a first standard size based on the transformation matrix; and
determining that the transformed image is the image to be identified.
Illustratively, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the step of object recognition of the original image to determine an object region of the object in the original image comprising: carrying out target identification on the original image by using the second CNN so as to determine a target area of the target in the original image.
Illustratively, the second CNN is trained by using a set of second training images, wherein the second training images are marked with positions of targets.
Illustratively, the computer program instructions, when executed by a computer or processor, further cause the computer or processor to perform the following step before the step of performing target recognition on the original image: scaling the original image to a second standard size.
Illustratively, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the step of detecting the quality of the image to be recognized to output a quality evaluation graph, including: detecting the quality of the image to be identified by using the first CNN to output the quality evaluation graph.
Illustratively, the first CNN is obtained by training with a set of first training images, where distortion regions are marked on the first training images, where the distortion regions are regions in which the recognition accuracy of corresponding contents of pixels in the first training images is lower than a preset threshold.
Illustratively, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the step of determining the identification accuracy of the region of interest from the quality assessment map comprising:
determining a corresponding region of the region of interest in the quality evaluation graph according to the position corresponding relation between the quality evaluation graph and the image to be identified; and
determining the identification accuracy of the region of interest according to the pixel values in the corresponding region.
The modules in the object recognition system according to an embodiment of the present invention may be implemented by a processor of an electronic device implementing object recognition according to an embodiment of the present invention running computer program instructions stored in a memory, or may be implemented when computer instructions stored in a computer-readable storage medium of a computer program product according to an embodiment of the present invention are run by a computer.
According to the target identification method, apparatus, system, and storage medium of the embodiments of the present invention, target identification is assisted by quality detection of the image to be identified, which can improve the identification accuracy for the region of interest of the target. The user can also learn more clearly about possible target misidentification caused by quality problems in the image to be identified, so the reliability and adaptability of the target identification system are greatly improved, and the user experience is significantly improved.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Moreover, those skilled in the art will appreciate that although some embodiments described herein include some features included in other embodiments, not others, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the blocks in an object recognition arrangement according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means can be embodied by one and the same item of hardware. The usage of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.
The above description is only for the specific embodiment of the present invention or the description thereof, and the protection scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (12)
1. An object recognition method, comprising:
acquiring an image to be identified;
detecting the quality of the image to be identified to output a quality evaluation graph, wherein the pixel value of a pixel in the quality evaluation graph represents the identification accuracy of the corresponding content of the pixel;
identifying a region of interest in the image to be identified, wherein the region of interest is a sub-region of a region of an object in the image to be identified; and
determining the identification accuracy of the region of interest according to the quality evaluation graph;
wherein the determining the identification accuracy of the region of interest according to the quality evaluation graph comprises:
determining a corresponding region of the region of interest in the quality evaluation graph according to the position corresponding relation between the quality evaluation graph and the image to be identified; and
determining the identification accuracy of the region of interest according to the pixel values in the corresponding region.
2. The method of claim 1, wherein the acquiring the image to be identified comprises:
acquiring an original image;
performing target recognition on the original image to determine a target area of the target in the original image; and
acquiring the image to be identified based on the image of the target area.
3. The method of claim 2, wherein the acquiring the image to be identified based on the image of the target region comprises:
determining a transformation matrix according to the positioning information of the target area; and
transforming the image of the target area into the image to be recognized having a standard shape and a first standard size based on the transformation matrix.
4. The method of claim 2 or 3, wherein the performing object recognition on the original image to determine an object region of the object in the original image comprises:
performing target identification on the original image by using a second convolutional neural network to determine a target area of the target in the original image.
5. The method of claim 4, wherein the second convolutional neural network is trained using a set of second training images, wherein the second training images are labeled with the location of the target.
6. The method of claim 2 or 3, wherein, prior to the performing of target identification on the original image, the method further comprises:
scaling the original image to a second standard size.
7. The method of any one of claims 1 to 3, wherein the detecting the quality of the image to be recognized to output a quality evaluation map comprises:
detecting the quality of the image to be identified by using a first convolutional neural network to output the quality evaluation map.
8. The method of claim 7, wherein,
the first convolutional neural network is trained using a set of first training images, wherein distortion regions are labeled on the first training images, and a distortion region is a region in which the identification accuracy of the content corresponding to its pixels is lower than a preset threshold.
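The training labels of claim 8 can be sketched as follows: the distortion regions marked on a first training image are rasterized into a per-pixel target for the first convolutional neural network. The 1.0/0.0 encoding (reliable vs. distorted) and the rectangular region format are assumptions; the claim only requires that distortion regions be areas whose content would be identified with accuracy below a preset threshold.

```python
# Sketch of building a per-pixel training target from marked distortion
# regions (claim 8). Encoding and rectangle format are assumptions.

def make_quality_target(height, width, distortion_regions):
    """distortion_regions: list of (x, y, width, height) rectangles."""
    target = [[1.0] * width for _ in range(height)]  # default: reliable
    for x, y, w, h in distortion_regions:
        for row in range(y, y + h):
            for col in range(x, x + w):
                target[row][col] = 0.0  # marked as distorted (e.g. glare)
    return target

# A 4x6 training image with one marked glare/blur rectangle.
label = make_quality_target(4, 6, [(2, 1, 3, 2)])
for row in label:
    print(row)
```

Trained against such targets with a per-pixel regression loss, the network's output naturally takes the form of the quality evaluation map used in claim 1.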
9. The method of any one of claims 1 to 3, wherein the target is an identification card and the region of interest is a text region.
10. An object recognition apparatus comprising:
the acquisition module is used for acquiring an image to be identified;
the detection module is used for detecting the quality of the image to be identified to output a quality evaluation map, wherein the pixel value of each pixel in the quality evaluation map represents the identification accuracy of the content corresponding to that pixel;
the identification module is used for identifying a region of interest in the image to be identified, wherein the region of interest is a sub-region of a region of an object in the image to be identified; and
the determination module is configured to determine the identification accuracy of the region of interest according to the quality evaluation map, and specifically, to determine a corresponding region of the region of interest in the quality evaluation map according to a position correspondence between the quality evaluation map and the image to be identified, and determine the identification accuracy of the region of interest according to a pixel value in the corresponding region.
11. An object recognition system comprising a processor and a memory, wherein the memory stores computer program instructions which, when executed by the processor, perform the object recognition method of any one of claims 1 to 9.
12. A storage medium having stored thereon program instructions which, when executed, perform the object recognition method of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711457414.7A CN108875731B (en) | 2017-12-28 | 2017-12-28 | Target identification method, device, system and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711457414.7A CN108875731B (en) | 2017-12-28 | 2017-12-28 | Target identification method, device, system and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108875731A CN108875731A (en) | 2018-11-23 |
CN108875731B (en) | 2022-12-09
Family
ID=64325611
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711457414.7A Active CN108875731B (en) | 2017-12-28 | 2017-12-28 | Target identification method, device, system and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108875731B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109636785A (en) * | 2018-12-07 | 2019-04-16 | 南京埃斯顿机器人工程有限公司 | A kind of visual processing method identifying particles of silicon carbide |
CN109783674A (en) * | 2018-12-13 | 2019-05-21 | 平安普惠企业管理有限公司 | Image identification method, device, system, computer equipment and storage medium |
CN109872282B (en) * | 2019-01-16 | 2021-08-06 | 众安信息技术服务有限公司 | Image desensitization method and system based on computer vision |
CN112825123A (en) * | 2019-11-20 | 2021-05-21 | 北京沃东天骏信息技术有限公司 | Character recognition method, system and storage medium |
CN112861589A (en) * | 2019-11-28 | 2021-05-28 | 马上消费金融股份有限公司 | Portrait extraction, quality evaluation, identity verification and model training method and device |
CN111091126A (en) * | 2019-12-12 | 2020-05-01 | 京东数字科技控股有限公司 | Certificate image reflection detection method, device, equipment and storage medium |
CN111126098B (en) * | 2019-12-24 | 2023-11-07 | 京东科技控股股份有限公司 | Certificate image acquisition method, device, equipment and storage medium |
CN111626244B (en) * | 2020-05-29 | 2023-09-12 | 中国工商银行股份有限公司 | Image recognition method, device, electronic equipment and medium |
CN111738147A (en) * | 2020-06-22 | 2020-10-02 | 浙江大华技术股份有限公司 | Article wearing detection method and device, computer equipment and storage medium |
CN112526626B (en) * | 2020-06-29 | 2023-09-12 | 北京清大视觉科技有限公司 | Intelligent detection system and method for emptying state of security inspection tray |
CN113887631A (en) * | 2021-09-30 | 2022-01-04 | 北京百度网讯科技有限公司 | Image data processing method, and training method, device and equipment of target model |
CN115861162B (en) * | 2022-08-26 | 2024-07-26 | 宁德时代新能源科技股份有限公司 | Method, apparatus and computer readable storage medium for locating target area |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102567993A (en) * | 2011-12-15 | 2012-07-11 | 中国科学院自动化研究所 | Fingerprint image quality evaluation method based on main component analysis |
US8718365B1 (en) * | 2009-10-29 | 2014-05-06 | Google Inc. | Text recognition for textually sparse images |
CN106650743A (en) * | 2016-09-12 | 2017-05-10 | 北京旷视科技有限公司 | Strong light reflection detection method and device of image |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7711158B2 (en) * | 2004-12-04 | 2010-05-04 | Electronics And Telecommunications Research Institute | Method and apparatus for classifying fingerprint image quality, and fingerprint image recognition system using the same |
JP4875117B2 (en) * | 2009-03-13 | 2012-02-15 | 株式会社東芝 | Image processing device |
WO2010107411A1 (en) * | 2009-03-17 | 2010-09-23 | Utc Fire & Security Corporation | Region-of-interest video quality enhancement for object recognition |
CN104036236B (en) * | 2014-05-27 | 2017-03-29 | 厦门瑞为信息技术有限公司 | A kind of face gender identification method based on multiparameter exponential weighting |
CN104243973B (en) * | 2014-08-28 | 2017-01-11 | 北京邮电大学 | Video perceived quality non-reference objective evaluation method based on areas of interest |
CN104850823B (en) * | 2015-03-26 | 2017-12-22 | 浪潮软件集团有限公司 | Quality evaluation method and device for iris image |
CN105631439B (en) * | 2016-02-18 | 2019-11-08 | 北京旷视科技有限公司 | Face image processing process and device |
CN106067019A (en) * | 2016-05-27 | 2016-11-02 | 北京旷视科技有限公司 | The method and device of Text region is carried out for image |
CN106296665B (en) * | 2016-07-29 | 2019-05-14 | 北京小米移动软件有限公司 | Card image fuzzy detection method and apparatus |
CN106228168B (en) * | 2016-07-29 | 2019-08-16 | 北京小米移动软件有限公司 | The reflective detection method of card image and device |
CN107481238A (en) * | 2017-09-20 | 2017-12-15 | 众安信息技术服务有限公司 | Image quality measure method and device |
- 2017-12-28: CN application CN201711457414.7A filed; granted as CN108875731B (en), status Active
Non-Patent Citations (1)
Title |
---|
Research on Object Detection Technology in Robot Vision Systems; Mao Yuren (毛玉仁); Wanfang Data Knowledge Service Platform; 2017-08-29; main text, last paragraph of page 35 and first paragraph of page 38 *
Also Published As
Publication number | Publication date |
---|---|
CN108875731A (en) | 2018-11-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108875731B (en) | Target identification method, device, system and storage medium | |
CN107403424B (en) | Vehicle loss assessment method and device based on image and electronic equipment | |
CN106650662B (en) | Target object shielding detection method and device | |
CN109961009B (en) | Pedestrian detection method, system, device and storage medium based on deep learning | |
CN106599772B (en) | Living body verification method and device and identity authentication method and device | |
CN109086691B (en) | Three-dimensional face living body detection method, face authentication and identification method and device | |
CN108875723B (en) | Object detection method, device and system and storage medium | |
CN108875534B (en) | Face recognition method, device, system and computer storage medium | |
CN108932456B (en) | Face recognition method, device and system and storage medium | |
CN108875535B (en) | Image detection method, device and system and storage medium | |
CN108734185B (en) | Image verification method and device | |
CN111914775B (en) | Living body detection method, living body detection device, electronic equipment and storage medium | |
CN110008997B (en) | Image texture similarity recognition method, device and computer readable storage medium | |
US11568665B2 (en) | Method and apparatus for recognizing ID card | |
CN109241888B (en) | Neural network training and object recognition method, device and system and storage medium | |
US9679218B2 (en) | Method and apparatus for image matching | |
CN106650743B (en) | Image strong reflection detection method and device | |
CN108875556B (en) | Method, apparatus, system and computer storage medium for testimony of a witness verification | |
CN110263805B (en) | Certificate verification and identity verification method, device and equipment | |
CN111626163A (en) | Human face living body detection method and device and computer equipment | |
CN111008935A (en) | Face image enhancement method, device, system and storage medium | |
CN111680680B (en) | Target code positioning method and device, electronic equipment and storage medium | |
CN109948521A (en) | Image correcting error method and device, equipment and storage medium | |
CN111626295A (en) | Training method and device for license plate detection model | |
US10909227B2 (en) | Certificate verification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||