CN114819099A - System for improving image recognition accuracy rate based on intelligent AI - Google Patents


Info

Publication number
CN114819099A
CN114819099A (application CN202210536370.1A)
Authority
CN
China
Prior art keywords
image
sets
target area
execution unit
weight distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210536370.1A
Other languages
Chinese (zh)
Inventor
袁杰 (Yuan Jie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong Fulirong Technology Development Co ltd
Original Assignee
Nantong Fulirong Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong Fulirong Technology Development Co., Ltd.
Priority to CN202210536370.1A
Publication of CN114819099A
Legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/044 - Recurrent networks, e.g. Hopfield networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a system for improving image recognition accuracy based on intelligent AI, relating to the field of image processing. The system comprises: a first execution unit, configured to collect and obtain a first target area image set; a first obtaining unit, configured to obtain a plurality of second target area image sets; a second execution unit, configured to obtain a plurality of first image sets and a plurality of second image sets; a second obtaining unit, configured to obtain a first weight distribution result; a third execution unit, configured to obtain a plurality of first output results and a plurality of second output results; and a fourth execution unit, configured to perform weighting adjustment on the plurality of first output results according to the first weight distribution result and to obtain a first target recognition result by combining the plurality of second output results. This solves the technical problem in the prior art that the image recognition effect is poor, so that the image recognition accuracy is low.

Description

System for improving image recognition accuracy based on intelligent AI
Technical Field
The invention relates to the field of image processing, and in particular to a system for improving image recognition accuracy based on intelligent AI.
Background
An image is visual information, the kind of information closest to everyday life: an entity produced when an object in the objective world acts directly or indirectly on the human eye to generate vision. Conventional image processing technology saves, processes, compresses, transmits and reproduces images. With the advent of the information age, image processing technology has developed rapidly and has driven the emergence and development of image recognition technology. Image recognition refers to the technique of processing, analyzing and understanding an image with a computer in order to recognize targets and objects of various patterns. Image recognition technology is widely applied in fields such as face recognition, fingerprint recognition, medical diagnosis and aerospace, where it plays a major role. Meanwhile, Artificial Intelligence (AI) is an important product of the information age, and the rapidly advancing AI technology influences the development of science and technology. Combining image recognition with Artificial Intelligence (AI) to research and design a system that optimizes image recognition accuracy therefore has important practical significance.
In the prior art, the image recognition effect is poor, and the image recognition accuracy is low.
Disclosure of Invention
The application provides a system for improving image recognition accuracy based on intelligent AI, which solves the technical problem that in the prior art the image recognition effect is poor and the image recognition accuracy is low.
In view of the foregoing, the present application provides a system for improving image recognition accuracy based on intelligent AI.
In a first aspect of the present application, a system for improving image recognition accuracy based on intelligent AI is provided, the system including: a first execution unit, configured to collect and obtain a first target area image set, wherein the target area is the area where a first target is located; a first obtaining unit, configured to generate and obtain a plurality of second target area image sets from the first target area image set by using a generative adversarial network model; a second execution unit, configured to preprocess the images in the first target area image set and the plurality of second target area image sets to obtain a plurality of first image sets and a plurality of second image sets; a second obtaining unit, configured to perform weight distribution according to the degree of color-level variation of the images in the plurality of first image sets to obtain a first weight distribution result; a third execution unit, configured to input the plurality of first image sets and the plurality of second image sets into an image recognition analysis model to obtain a plurality of first output results and a plurality of second output results; and a fourth execution unit, configured to perform weighting adjustment on the plurality of first output results according to the first weight distribution result and to obtain a first target recognition result by combining the plurality of second output results.
In a second aspect of the present application, a method for improving image recognition accuracy based on intelligent AI is provided, the method including: collecting a first target area image set, wherein the target area is the area where a first target is located; generating and obtaining a plurality of second target area image sets from the first target area image set by using a generative adversarial network model; preprocessing the images in the first target area image set and the plurality of second target area image sets to obtain a plurality of first image sets and a plurality of second image sets; performing weight distribution according to the degree of color-level variation of the images in the plurality of first image sets to obtain a first weight distribution result; inputting the plurality of first image sets and the plurality of second image sets into an image recognition analysis model to obtain a plurality of first output results and a plurality of second output results; and performing weighting adjustment on the plurality of first output results according to the first weight distribution result, obtaining a first target recognition result by combining the plurality of second output results.
In a third aspect of the present application, a system for improving image recognition accuracy based on intelligent AI is provided, including a processor coupled to a memory, the memory storing a program that, when executed by the processor, causes the system to perform the functions of the system of the first aspect.
In a fourth aspect of the present application, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements the functions of the system according to the first aspect.
One or more technical solutions provided in the present application have at least the following technical effects or advantages:
a first target area image set is collected; a plurality of second target area image sets are obtained by using the generative adversarial network model; the images in the first target area image set and the plurality of second target area image sets are preprocessed to obtain a plurality of first image sets and a plurality of second image sets; a first weight distribution result is obtained according to the degree of color-level variation of the images in the plurality of first image sets; the plurality of first image sets and the plurality of second image sets are input into an image recognition analysis model to obtain a plurality of first output results and a plurality of second output results; and weighting adjustment is performed on the plurality of first output results according to the first weight distribution result, obtaining a first target recognition result by combining the plurality of second output results. The technical solutions provided in the embodiments of the present application improve the effect and quality of image recognition and enhance its accuracy and precision; meanwhile, they reduce the cost of image recognition and avoid wasting resources such as manpower and materials; and they lay a foundation for the further development of image recognition technology.
The foregoing is only an overview of the technical solutions of the present application. In order to make the technical means of the present application more clearly understood, the application can be implemented according to the content of the description; and in order to make the above and other objects, features and advantages of the present application more clearly understandable, a detailed description of the present application is given below.
Drawings
In order to illustrate the technical solutions of the present application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only exemplary, and those skilled in the art can obtain other drawings from the provided drawings without inventive effort.
FIG. 1 is a schematic flow chart illustrating a method for improving image recognition accuracy based on intelligent AI according to the present application;
FIG. 2 is a schematic flow chart illustrating a first weight distribution result obtained in the method for improving image recognition accuracy based on intelligent AI according to the present application;
FIG. 3 is a schematic structural diagram of a system for improving image recognition accuracy based on intelligent AI according to the present application;
FIG. 4 is a schematic structural diagram of an exemplary electronic device of the present application.
Description of reference numerals: the system comprises a first execution unit 11, a first obtaining unit 12, a second execution unit 13, a second obtaining unit 14, a third execution unit 15, a fourth execution unit 16, an electronic device 300, a memory 301, a processor 302, a communication interface 303 and a bus architecture 304.
Detailed Description
The present application provides a system for improving image recognition accuracy based on intelligent AI, which solves the technical problem that in the prior art the image recognition effect is poor and the image recognition accuracy is low. The effect and quality of image recognition are improved, and its accuracy and precision are enhanced; meanwhile, the cost of image recognition is reduced, and waste of resources such as manpower and materials is avoided; and a foundation is laid for the further development of image recognition technology.
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are merely some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited to the example embodiments described herein.
According to the technical scheme, the data acquisition, storage, use, processing and the like meet relevant regulations of national laws and regulations.
An image is visual information, the kind of information closest to everyday life: an entity produced when an object in the objective world acts directly or indirectly on the human eye to generate vision. Conventional image processing technology saves, processes, compresses, transmits and reproduces images. With the advent of the information age, image processing technology has developed rapidly and has driven the emergence and development of image recognition technology. Image recognition refers to the technique of processing, analyzing and understanding an image with a computer in order to recognize targets and objects of various patterns. Image recognition technology is widely applied in fields such as face recognition, fingerprint recognition, medical diagnosis and aerospace, where it plays a major role. Meanwhile, Artificial Intelligence (AI) is an important product of the information age, and the rapidly advancing AI technology influences the development of science and technology. Combining image recognition with Artificial Intelligence (AI) to research and design a system that optimizes image recognition accuracy therefore has important practical significance. In the prior art, however, the image recognition effect is poor, and the image recognition accuracy is low.
In view of the above technical problems, the technical solution provided by the present application has the following general idea:
the system provided by the application acquires a first target area image set through collection; obtaining a plurality of second target area image sets by utilizing the generative confrontation network model; preprocessing images in the first target area image set and the second target area image sets to obtain a plurality of first image sets and a plurality of second image sets; obtaining a first weight distribution result according to the color gradation change degree of the images in the plurality of first image sets; inputting the plurality of first image sets and the plurality of second image sets into an image recognition analysis model to obtain a plurality of first output results and a plurality of second output results; and based on the first target identification result, carrying out weighting adjustment to obtain the first target identification result.
For better understanding of the above technical solutions, the following detailed descriptions will be provided in conjunction with the drawings and the detailed description of the embodiments.
Example one
Referring to fig. 1, the present application provides a method for improving image recognition accuracy based on intelligent AI, which specifically includes the following steps:
s100: acquiring a first target area image set, wherein the target area is an area where a first target is located;
specifically, the method for improving the image recognition accuracy based on the intelligent AI is applied to a system for improving the image recognition accuracy based on the intelligent AI. The system for improving the image recognition accuracy based on the intelligent AI obtains a first target area image set through modes of big data acquisition and the like. Wherein the first target comprises any object, such as a person, a train, an airplane, an animal, etc., which is subjected to image recognition by using the system for improving image recognition accuracy based on intelligent AI. The target area is an area where a first target is located, e.g. a park, a road, a school, a casino, etc. The first target area image set comprises all image information which is collected by the first target in the area where the first target is located and takes time as a sequence. Illustratively, the first target is a vehicle. The first target area is road a on which the vehicle is traveling. The first target area image set comprises all image data information which is shot in time sequence when a vehicle runs on the road A. The technical effects of obtaining the first target area image set and providing data support for the subsequent image recognition process are achieved.
S200: generating and obtaining a plurality of second target area image sets from the first target area image set by using a generative adversarial network model;
further, step S200 of the present application further includes:
s210: constructing and training to obtain the generative confrontation network model;
further, step S210 of the present application further includes:
s211: collecting a third target area image set of the first target in the target area;
s212: obtaining a training sample and a test sample according to the third target area image set;
s213: constructing a generation model and a discrimination model;
s214: inputting the training sample into the generating model, and superposing noise to generate and obtain a confrontation sample;
s215: inputting the confrontation sample into the discriminant model, and distinguishing the confrontation sample from the training sample;
s216: updating parameters of the generated model and the discrimination model, and iterating;
s217: until the discriminant model can not discriminate the generated confrontation sample and the training sample, obtaining the generated confrontation network model.
Specifically, the system for improving image recognition accuracy based on intelligent AI collects all the image data of the first target in the target area, either historical image data previously collected in the target area or image data newly collected there, as construction data for the generative adversarial network model. This image data constitutes the third target area image set.
The data in the third target area image set is divided into a training sample and a test sample. Based on a generative adversarial network of the prior art, a generative model and a discriminative model are constructed, and the training sample and the test sample are used to train them.
The training sample is input into the generative model, which outputs an adversarial sample. The generative model continuously learns the probability distribution of the real data in the training sample and converts input random noise into data that can pass for real, outputting generated image data, namely the adversarial sample, in combination with the input training sample. The more similar the generated adversarial sample is to the data in the training sample, the better. The obtained adversarial sample is then input into the discriminative model, whose goal is to distinguish the adversarial sample (false sample) generated by the generative model from the training sample (true sample), i.e. to judge whether data is real or generated. Through this adversarial relationship, the parameters of the generative model and the discriminative model are updated and iteratively optimized. When the discriminative model can no longer distinguish the generated adversarial sample from the training sample, the generative adversarial network model is obtained. The test sample can be used to test performance parameters such as the degree of fit and the equilibrium state of the generative model and the discriminative model; it also allows the performance of the two models to be judged, further improving the accuracy of the constructed generative adversarial network model.
The generative adversarial network model comprises the generative model and the discriminative model. The two models are in an adversarial relationship: the generative model tries to generate samples that fool the discriminative model, while the discriminative model tries to identify the generative model's false samples. The generative adversarial network makes the two models play a game through this adversarial relationship. During training, the generative model and the discriminative model strengthen each other through mutual competition and are continuously optimized, yielding models of higher accuracy. A generative adversarial network has the advantages of generating accurate sample data, a wide range of application, simplicity and time saving. This builds a generative adversarial network model of high accuracy and reliability from the generative model and the discriminative model, and lays a foundation for subsequently obtaining accurate second target area image sets.
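The adversarial loop of steps S211 to S217 can be sketched in miniature. The example below is a deliberately toy generative adversarial setup, assuming one-dimensional data in place of images: the "real" samples come from a normal distribution around 3, the generator shifts input noise by a learnable offset, and the discriminator is a single logistic unit. All names and hyperparameters (b, w, c, lr) are illustrative assumptions, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

b = 0.0          # generator parameter: g(z) = z + b (shift the noise)
w, c = 0.1, 0.0  # discriminator parameters: D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(3000):
    real = rng.normal(3.0, 1.0, 64)   # training samples (true data)
    z = rng.normal(0.0, 1.0, 64)      # superposed noise
    fake = z + b                      # adversarial samples from the generator

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: descend -log D(fake), pushing fakes toward "real"
    d_fake = sigmoid(w * fake + c)
    b += lr * np.mean((1 - d_fake) * w)

print(round(b, 2))  # drifts toward the real mean of 3.0
```

As training alternates, the generator offset drifts toward the real mean while the discriminator's weight shrinks, which mirrors the stopping condition of step S217: the discriminator can no longer tell generated samples from training samples.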
S220: dividing the images in the first target area image set according to a time sequence to obtain a first image set;
s230: inputting images in the first image set into the generative confrontation network model, generating and obtaining a plurality of images according to each image, and obtaining a plurality of second image sets;
s240: and sequencing the images in the second image sets according to the time sequence to obtain a plurality of second target area image sets.
Specifically, on the basis of the first target area image set, its images are divided in time sequence to obtain a first image set; the first image set is any image set obtained by dividing the images of the first target area image set according to the time sequence. The images in the first image set are then input into the trained generative adversarial network model, which generates a plurality of new images and outputs a plurality of second image sets. The images in the plurality of second image sets are then sorted in time sequence to obtain the plurality of second target area image sets. The plurality of second image sets are obtained by the generative adversarial network model performing intelligent processing, such as augmentation of the image data, on each image in the first image set; the plurality of second target area image sets are the image sets obtained by sorting these second image sets in time sequence, and they contain image data that is similar to, yet different from, the first target area image set. This uses the generative adversarial network model to obtain the plurality of second target area image sets, enlarging the data volume and laying a foundation for their subsequent preprocessing.
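The time-sequence division and re-sorting of steps S220 and S240 amount to sorting by timestamp and chunking. A minimal sketch, assuming each image is represented by a hypothetical (timestamp, frame_id) record; the record layout and chunk size are illustrative, not specified by the patent.

```python
from itertools import islice

# Images arrive as (timestamp, frame_id) records in arbitrary order.
records = [(5, "f5"), (1, "f1"), (4, "f4"), (2, "f2"), (3, "f3"), (6, "f6")]

def divide_by_time(records, chunk_size):
    """Sort records by timestamp, then divide into fixed-size chunks."""
    ordered = sorted(records, key=lambda r: r[0])   # time-sequence ordering
    it = iter(ordered)
    chunks = []
    while True:
        chunk = list(islice(it, chunk_size))
        if not chunk:
            return chunks
        chunks.append(chunk)

chunks = divide_by_time(records, 2)
print(chunks)  # [[(1,'f1'),(2,'f2')], [(3,'f3'),(4,'f4')], [(5,'f5'),(6,'f6')]]
```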
S300: preprocessing images in the first target area image set and the second target area image sets to obtain a plurality of first image sets and a plurality of second image sets;
further, step S300 of the present application further includes:
s310: respectively extracting images of a first preset frame number in the first target area image set and the second target area image sets to obtain a first preprocessed image set;
s320: performing image quality enhancement processing on the first pre-processed image set to obtain the first image set;
s330: respectively extracting images of a second preset frame number in the first target area image set and the plurality of second target area image sets to obtain a second pre-processing image set, wherein the second preset frame number is greater than the first preset frame number;
s340: and performing image quality reduction processing on the second pre-processed image set to obtain the second image set.
Specifically, on the basis of the first target area image set and the plurality of second target area image sets, the system for improving image recognition accuracy based on intelligent AI comprehensively analyzes the image recognition process and presets a first preset frame number and a second preset frame number. Both may be any frame number, provided the second preset frame number is greater than the first preset frame number: for example, the first preset frame number is 4 frames and the second preset frame number is 15 frames.
Images of the first preset frame number are then extracted from the first target area image set and the plurality of second target area image sets to obtain a first preprocessed image set, and image quality enhancement processing is performed on the first preprocessed image set to obtain the first image set. The first image set is any image set obtained from the first preprocessed image set by image quality enhancement processing such as sharpening, enhancing definition and enriching image detail. Image quality enhancement improves the visual effect of the image, raises the definition of image components, makes the image more amenable to computer processing, and strengthens judgment and recognition of the image. Based on the second preset frame number, images are extracted again from the first target area image set and the plurality of second target area image sets to obtain a second preprocessed image set, and image quality reduction processing is performed on it to obtain the second image set. The second image set is obtained by compressing and reducing the quality of the second preprocessed image set, for example by lowering its resolution; image quality reduction compresses the image data, saves space and improves image transmission speed. Preprocessing the images in the first target area image set and the plurality of second target area image sets by image quality enhancement and image quality reduction thus yields the plurality of first image sets and the plurality of second image sets, and lays a foundation for the subsequent image processing.
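The preprocessing of steps S310 to S340 can be sketched as follows, treating an image set as a stack of grayscale frames in NumPy arrays. Since the patent does not specify the enhancement and reduction operations, this sketch stands in a simple contrast stretch for "image quality enhancement" and 2x downsampling for "image quality reduction"; the frame counts 4 and 15 follow the example above, and the evenly spaced extraction is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
frames = rng.integers(60, 180, size=(30, 8, 8)).astype(np.uint8)  # 30 frames

def extract(frames, n):
    """Extract a preset number of evenly spaced frames from the stack."""
    idx = np.linspace(0, len(frames) - 1, n).astype(int)
    return frames[idx]

def enhance(img):
    """Quality enhancement stand-in: stretch tones to the full 0-255 range."""
    lo, hi = img.min(), img.max()
    return ((img - lo) / max(hi - lo, 1) * 255).astype(np.uint8)

def reduce_quality(img):
    """Quality reduction stand-in: halve the resolution."""
    return img[::2, ::2]

first_set = [enhance(f) for f in extract(frames, 4)]            # first preset: 4
second_set = [reduce_quality(f) for f in extract(frames, 15)]   # second preset: 15
print(len(first_set), len(second_set), second_set[0].shape)
```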
S400: performing weight distribution according to the degree of color-level variation of the images in the plurality of first image sets to obtain a first weight distribution result;
further, as shown in fig. 2, step S400 of the present application further includes:
s410: performing graying processing on the images in the plurality of first image sets to obtain a plurality of first grayscale image sets;
s420: extracting gray scale values in the plurality of first gray scale image sets to obtain a plurality of gray scale distribution sets;
s430: respectively obtaining the difference between the maximum gray value and the minimum gray value in the gray scale distribution sets to obtain a plurality of gray differences;
s440: carrying out gray scale value clustering according to the plurality of gray scale distribution sets respectively to obtain a plurality of gray scale clustering sets;
s450: and carrying out weight distribution according to the gray differences and the gray clustering sets to obtain the first weight distribution result.
Specifically, after the plurality of first image sets are obtained, graying processing is performed on them to obtain a plurality of first grayscale image sets. Graying is the process of converting a color image into a grayscale image. The color of each pixel in a color image is determined by the three components R, G and B, each with 256 levels, so a single pixel can take any of roughly 16.7 million (256 × 256 × 256) colors. A grayscale image is a special color image whose R, G and B components are identical, so one pixel varies only over the range 0 to 255. The grayed first image sets contain only brightness information, no color information: the plurality of first grayscale image sets discard the color information of the images while still reflecting the overall and local brightness distribution of the plurality of first image sets. Graying can be performed by the maximum method, the average method, the weighted average method and so on, and has the advantages of reducing the raw data volume of the image and facilitating its analysis and calculation.
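The weighted-average graying mentioned above can be written in a few lines, assuming the common luminance weights 0.299R + 0.587G + 0.114B (the paragraph names only the method family: maximum, average, or weighted average, without fixing the coefficients).

```python
import numpy as np

def to_gray(rgb):
    """Weighted-average graying: rgb is an (H, W, 3) uint8 array."""
    weights = np.array([0.299, 0.587, 0.114])   # standard luminance weights
    return (rgb.astype(float) @ weights).astype(np.uint8)

pixel = np.array([[[255, 0, 0]]], dtype=np.uint8)   # a single pure-red pixel
print(int(to_gray(pixel)[0, 0]))  # 0.299 * 255 = 76.245 -> 76
```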
The gray scale values of the obtained plurality of first grayscale image sets are then extracted to obtain a plurality of gray scale distribution sets. A gray scale value varies over 0 to 255, the same range as a pixel of a grayscale image, and the gray scale distribution sets represent how the gray scale values of the pixels in the plurality of first grayscale image sets vary. Next, the maximum and minimum gray values in each of the plurality of gray scale distribution sets are extracted in turn, and the minimum is subtracted from the maximum to obtain a plurality of gray differences.
Further, gray-value clustering is performed on the plurality of gray scale distribution sets by means of cluster analysis to obtain a plurality of gray clustering sets, and weight distribution is performed based on the gray clustering sets and the gray differences to obtain the first weight distribution result. Cluster analysis groups similar research objects into classes when the objects under study are complex, so that the homogeneity of objects within a class and the heterogeneity of objects between classes are both maximized; that is, individuals in the same class are highly similar, while individuals in different classes differ markedly. Clustering methods include hierarchical clustering, fuzzy clustering, dynamic clustering and others. Cluster analysis has the advantages of classifying the objects under study, reducing their number, and being simple, fast, and easy to understand and implement. The gray clustering sets are obtained by clustering the data in the gray scale distribution sets according to certain ranges of gray values: data within the same gray clustering set fall in the same gray-value range, and different gray clustering sets cover different ranges. Illustratively, clustering the data whose gray values lie between 100 and 150 across the gray scale distribution sets yields one gray clustering set. If a given gray-value range contains too little data, no cluster is formed for it. The more clustering results there are and the more uniformly they are distributed over the image, the greater the degree of color variation of the image and the more readily the target position can be identified, so a larger weight is set.
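A minimal sketch of the range-based gray clustering described above; the bin width and the sparsity threshold below are illustrative assumptions, not values fixed by the application:

```python
def cluster_grays(values, bin_width=50, min_count=2):
    """Group gray values (0..255) into fixed-width ranges; ranges with
    fewer than min_count values are dropped, mirroring 'if the data in
    a certain gray-value range is sparse, no cluster is formed'."""
    bins = {}
    for v in values:
        bins.setdefault(v // bin_width, []).append(v)
    return {k: vs for k, vs in bins.items() if len(vs) >= min_count}
```

For example, `cluster_grays([100, 110, 140, 240])` keeps one cluster for the 100–149 range and discards the lone value 240; the number of surviving clusters then feeds into the weight distribution.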
Among the plurality of gray differences, the larger a gray difference is, the greater the color contrast of the image, which helps to identify the target and its position; such images are therefore assigned a larger weight, and images with smaller gray differences a smaller one.
In the process of obtaining the first weight distribution result, weight distribution is optionally performed twice, according to the magnitudes of the gray differences and according to the gray clustering sets. Any weight-assignment method in the prior art, such as the expert method, may be used to complete the distribution and obtain the first weight distribution result. Processing the plurality of first image sets through graying, weight distribution and similar operations achieves the technical effect of obtaining a first weight distribution result with higher accuracy and adaptability.
Further, step S450 of the present application further includes:
S451: performing weight distribution according to the difference values of the plurality of gray differences to obtain a second weight distribution result;
S452: performing weight distribution according to the number of clustering results in the plurality of gray clustering sets and the gray differences between the clustering results to obtain a third weight distribution result;
S453: weighting the second weight distribution result according to the third weight distribution result to obtain the first weight distribution result.
Specifically, after the plurality of gray differences and the plurality of gray clustering sets are obtained, weights are assigned according to the difference values of the gray differences, yielding a second weight distribution result. For example, if b is the largest difference value among the gray differences, then the weight value corresponding to b in the second weight distribution result is higher, and the weight values corresponding to the other gray differences are lower. Next, the number of clustering results in the gray clustering sets and the gray differences between the clustering results are computed, and weights are assigned accordingly to obtain a third weight distribution result; this distribution process is analogous to the one above. Finally, according to each weight value in the third weight distribution result, the second weight distribution result is weighted and renormalized so that the weight values sum to 1, giving the first weight distribution result. The first weight distribution result thus weights the gray differences of the first grayscale image sets by the number of clustering results in the gray clustering sets and the gray differences between those results. For example, if the weight corresponding to the number of clustering results is higher in the third weight distribution result, then the corresponding weight in the weighted second distribution result, i.e., in the first weight distribution result, is also higher.
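Steps S451–S453 can be sketched as assigning two weight vectors and renormalizing their product so the final weights sum to 1. This is a minimal illustration under assumed inputs; the actual assignment method (e.g. the expert method) is not fixed by the application:

```python
def normalize(weights):
    """Rescale weights so they sum to 1."""
    total = sum(weights)
    return [w / total for w in weights]

def first_weight_distribution(second_result, third_result):
    """S453: weight the second distribution result by the third and
    renormalize, yielding the first weight distribution result."""
    weighted = [s * t for s, t in zip(second_result, third_result)]
    return normalize(weighted)
```

For example, `first_weight_distribution([0.5, 0.3, 0.2], [0.2, 0.3, 0.5])` produces weights proportional to `[0.10, 0.09, 0.10]` that again sum to 1, so an image set rated highly by both criteria keeps a high final weight.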
Weighting the second weight distribution result by the third weight distribution result achieves the technical effects of obtaining a more accurate first weight distribution result and improving the accuracy and reasonableness of the first target recognition result.
S500: inputting the plurality of first image sets and the plurality of second image sets into an image recognition analysis model to obtain a plurality of first output results and a plurality of second output results;
further, step S500 of the present application further includes:
s510: constructing the image recognition analysis model, wherein the image recognition analysis model comprises a first recognition channel and a second recognition channel;
s520: inputting the plurality of first image sets into the first identification channel to obtain a plurality of first identification results;
s530: and inputting the plurality of second image sets into the second identification channel to obtain a plurality of second identification results.
Specifically, the image recognition analysis model is a SlowFast neural network model, which contains a SLOW pathway and a FAST pathway and is a convolutional neural network comprising an input layer, convolutional layers, pooling layers and a fully connected layer. Preferably, the first recognition channel used in the present application is the SLOW pathway and the second recognition channel is the FAST pathway. The plurality of first image sets are fed as input into the first recognition channel, which operates at a low frame rate and low temporal resolution, learns spatial semantic information and analyzes static content; it can recognize the static content in low-frame-rate, high-quality images and thus infer the position of the target, outputting a plurality of first recognition results that characterize the spatial information in the first image sets. The plurality of second image sets are fed as input into the second recognition channel, which operates at a high frame rate and high temporal resolution, analyzes dynamic content and captures rapidly changing motion information, such as the movement trajectory of the target, outputting a plurality of second recognition results that characterize how the second image sets change along the time dimension. The second recognition channel has the advantages of a small channel count and light weight. By using the SLOW and FAST pathways to process the input from the spatial and temporal dimensions respectively, the image recognition analysis model can further improve the accuracy of the recognition result.
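The frame-rate split between the two pathways can be illustrated by the sampling step alone. The stride of 8 below is an assumption for illustration; SlowFast architectures use a configurable frame-rate ratio between the pathways:

```python
def sample_pathways(frames, alpha=8):
    """Split one clip into the two SlowFast inputs: the SLOW pathway
    sees every alpha-th frame (low frame rate, spatial semantics),
    while the FAST pathway sees every frame (high frame rate, motion)."""
    slow = frames[::alpha]
    fast = list(frames)
    return slow, fast
```

For a 32-frame clip with `alpha=8`, the SLOW pathway receives only 4 frames — enough to analyze static content — while the FAST pathway receives all 32 and can track rapid motion.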
The technical effect of improving the accuracy of the plurality of first output results and the plurality of second output results by utilizing the image recognition analysis model is achieved.
S600: performing weighting adjustment on the plurality of first output results according to the first weight distribution result, and combining the plurality of second output results to obtain a first target recognition result.
Specifically, on the basis of the plurality of first recognition results and the plurality of second recognition results, the first output results are weighted and adjusted using the first weight distribution result and combined with the second output results to obtain a first target recognition result, which characterizes data such as the position of the first target. Illustratively, suppose one first output result indicates that the first target is at position D, and the first image set corresponding to that output has weight d in the first weight distribution result; suppose a second output result indicates that the first target is at position E with weight e. The outputs are weighted accordingly, and the position with the highest resulting probability is taken as the position output, i.e., the first target recognition result. The second output results also include the movement trajectory of the target, for example that the target left the target area moving from south to north. Obtaining the first target recognition result in this way improves the effect and quality of image recognition and enhances its accuracy and precision; at the same time, it reduces the cost of image recognition and avoids wasting manpower, material and other resources, achieving the technical effect of laying a foundation for the further development of image recognition technology.
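The fusion step — accumulating weighted votes per candidate position and taking the most probable one — can be sketched as follows (a simplified illustration of the weighting described above, with hypothetical position labels):

```python
def fuse_positions(weighted_outputs):
    """weighted_outputs: iterable of (position, weight) pairs, one per
    output result. Accumulate the weight voting for each candidate
    position and return the position with the highest total score."""
    scores = {}
    for position, weight in weighted_outputs:
        scores[position] = scores.get(position, 0.0) + weight
    return max(scores, key=scores.get)
```

For instance, `fuse_positions([("D", 0.4), ("E", 0.35), ("D", 0.25)])` returns `"D"`, since its accumulated weight 0.65 exceeds the 0.35 voting for `"E"`.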
In summary, the method for improving the image recognition accuracy based on the intelligent AI provided by the present application has the following technical effects:
1. A first target area image set is acquired; a plurality of second target area image sets are obtained using the generative adversarial network model; the images in the first and second target area image sets are preprocessed to obtain a plurality of first image sets and a plurality of second image sets; a first weight distribution result is obtained according to the degree of color-level variation of the images in the first image sets; the first and second image sets are input into the image recognition analysis model to obtain a plurality of first output results and a plurality of second output results; and, based on the first weight distribution result, the first output results are weighted and adjusted and combined with the second output results to obtain the first target recognition result. This improves the effect and quality of image recognition and enhances its accuracy and precision; at the same time, it reduces the cost of image recognition and avoids wasting manpower, material and other resources, laying a foundation for the further development of image recognition technology.
2. The generative model and the discriminative model in the generative adversarial network model stand in an adversarial relationship: the generative model must produce samples that fool the discriminative model as far as possible, while the discriminative model must expose the generative model's fake samples as far as possible. Through this adversarial relationship the generative adversarial network makes the two models play a game against each other. During training, the generative and discriminative models strengthen and continuously optimize each other through mutual competition, ultimately yielding models of higher accuracy. Generative adversarial networks have the advantages of generating more accurate sample data, wide applicability, simplicity and time savings. Building a generative adversarial network model of higher accuracy and reliability from the generative and discriminative models achieves the technical effect of laying a foundation for subsequently obtaining accurate second target area image sets.
3. Graying refers to the process of converting a color image into a grayscale image. The grayed first image sets contain only brightness information and no color information. Although the obtained first grayscale image sets discard the color information of the images and keep only the brightness information, they still reflect the global and local chromaticity and brightness distribution of the original first image sets. Graying can be performed by the maximum method, the average method, the weighted-average method, and so on. It has the advantages of reducing the raw data volume of the image and facilitating subsequent analysis and calculation.
4. The image recognition analysis model is a SlowFast neural network model comprising a SLOW pathway and a FAST pathway. Preferably, the first recognition channel used in the present application is the SLOW pathway and the second recognition channel is the FAST pathway. The plurality of first image sets are fed into the first recognition channel, which operates at a low frame rate and low temporal resolution, learns spatial semantic information and analyzes static content, outputting a plurality of first recognition results that characterize the spatial information in the first image sets. The plurality of second image sets are fed into the second recognition channel, which operates at a high frame rate and high temporal resolution, analyzes dynamic content and captures rapidly changing motion information, outputting a plurality of second recognition results that characterize how the second image sets change along the time dimension. The second recognition channel has the advantages of a small channel count and light weight. By using the SLOW and FAST pathways to process the input from the spatial and temporal dimensions respectively, the model can further improve the accuracy of the recognition result.
Example two
Based on the same inventive concept as the method for improving the image recognition accuracy based on the intelligent AI in the foregoing embodiment, the present invention further provides a system for improving the image recognition accuracy based on the intelligent AI, referring to fig. 3, where the system includes:
the first execution unit 11 is configured to acquire and obtain a first target area image set, where the target area is an area where a first target is located;
a first obtaining unit 12, where the first obtaining unit 12 is configured to generate and obtain a plurality of second target area image sets from the first target area image set by using a generative adversarial network model;
a second execution unit 13, where the second execution unit 13 is configured to pre-process images in the first target area image set and the plurality of second target area image sets to obtain a plurality of first image sets and a plurality of second image sets;
a second obtaining unit 14, where the second obtaining unit 14 is configured to perform weight distribution according to the color level variation degree of the images in the plurality of first image sets to obtain a first weight distribution result;
a third executing unit 15, where the third executing unit 15 is configured to input the plurality of first image sets and the plurality of second image sets into an image recognition analysis model, and obtain a plurality of first output results and a plurality of second output results;
a fourth executing unit 16, where the fourth executing unit 16 is configured to perform weighting adjustment on the multiple first output results according to the first weight distribution result, and obtain a first target identification result by combining the multiple second output results.
Further, the system further comprises:
a third obtaining unit, configured to construct and train the generative adversarial network model;
a fourth obtaining unit, configured to divide the images in the first target area image set according to a time sequence to obtain a first image set;
a fifth obtaining unit, configured to input the images in the first image set into the generative adversarial network model, generate a plurality of images from each image, and obtain a plurality of second image sets;
a sixth obtaining unit, configured to sort, according to the time sequence, the images in the plurality of second image sets to obtain the plurality of second target area image sets.
Further, the system further comprises:
a fifth execution unit, configured to acquire a third target area image set of the first target located in the target area;
a seventh obtaining unit, configured to obtain a training sample and a test sample according to the third target area image set;
a sixth execution unit, configured to construct a generation model and a discrimination model;
an eighth obtaining unit, configured to input the training sample into the generation model, superimpose noise, and generate adversarial samples;
a seventh execution unit, configured to input the adversarial samples into the discrimination model and distinguish them from the training sample;
an eighth execution unit, configured to update the parameters of the generation model and the discrimination model and iterate;
a ninth execution unit, configured to obtain the generative adversarial network model once the discrimination model can no longer distinguish the adversarial samples from the training sample.
Further, the system further comprises:
a tenth execution unit, configured to extract images of a first preset frame number in the first target area image set and the plurality of second target area image sets respectively to obtain a first preprocessed image set;
an eleventh execution unit, configured to perform image quality enhancement processing on the first preprocessed image set to obtain the first image set;
a twelfth execution unit, configured to extract images with a second preset frame number in the first target area image set and the second target area image sets respectively to obtain a second preprocessed image set, where the second preset frame number is greater than the first preset frame number;
a thirteenth execution unit, configured to perform image quality reduction processing on the second preprocessed image set to obtain the second image set.
Further, the system further comprises:
a fourteenth execution unit, configured to perform graying processing on the images in the plurality of first image sets to obtain a plurality of first grayscale image sets;
a fifteenth executing unit, configured to extract and obtain grayscale values in the multiple first grayscale image sets, so as to obtain multiple grayscale distribution sets;
a sixteenth execution unit, configured to obtain the difference between the maximum grayscale value and the minimum grayscale value in each of the multiple grayscale distribution sets, to obtain multiple grayscale differences;
a ninth obtaining unit, configured to perform gray scale value clustering according to the multiple gray scale distribution sets, respectively, to obtain multiple gray scale clustering sets;
a seventeenth executing unit, configured to perform weight assignment according to the multiple grayscale differences and the multiple grayscale clustering sets, to obtain the first weight assignment result.
Further, the system further comprises:
a tenth obtaining unit, configured to perform weight distribution according to difference values of the multiple gray differences, respectively, to obtain a second weight distribution result;
an eighteenth execution unit, configured to perform weight distribution according to the number of clustering results in the multiple grayscale clustering sets and a grayscale difference between the clustering results, respectively, to obtain a third weight distribution result;
a nineteenth execution unit, configured to perform weighting processing on the second weight distribution result according to the third weight distribution result, to obtain the first weight distribution result.
Further, the system further comprises:
a twentieth execution unit, configured to construct the image recognition analysis model, where the image recognition analysis model includes a first recognition channel and a second recognition channel;
an eleventh obtaining unit, configured to input the plurality of first image sets into the first recognition channel, and obtain the plurality of first recognition results;
a twelfth obtaining unit, configured to input the plurality of second image sets into the second recognition channel, and obtain the plurality of second recognition results.
In this specification, the embodiments are described in a progressive manner, each focusing on its differences from the others; the method for improving image recognition accuracy based on intelligent AI in the first embodiment of fig. 1, together with its specific examples, also applies to the system for improving image recognition accuracy based on intelligent AI in this embodiment. Since the device disclosed in this embodiment corresponds to the method disclosed above, its description is kept brief, and the relevant points can be found in the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Exemplary electronic device
The electronic device of the present application is described below with reference to fig. 4.
Based on the same inventive concept as the method for improving the image recognition accuracy based on the intelligent AI in the foregoing embodiments, the present application also provides a system for improving the image recognition accuracy based on the intelligent AI, including: a processor coupled to a memory, the memory for storing a program that, when executed by the processor, causes a system to perform the method of any of the first aspects.
The electronic device 300 includes: processor 302, communication interface 303, memory 301. Optionally, the electronic device 300 may also include a bus architecture 304. Wherein, the communication interface 303, the processor 302 and the memory 301 may be connected to each other through a bus architecture 304; the bus architecture 304 may be a peripheral component interconnect standard bus or an extended industry standard architecture bus, or the like. The bus architecture 304 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
Processor 302 may be a CPU, microprocessor, ASIC, or one or more integrated circuits for controlling the execution of programs in accordance with the teachings of the present application. Communication interface 303, using any transceiver or the like, is used for communicating with other devices or communication networks, such as ethernet, wireless access networks, wireless local area networks, wired access networks, and the like. The memory 301 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read only memory, a read only optical disk or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and coupled to the processor through a bus architecture 304. The memory may also be integral to the processor.
The memory 301 is used for storing computer-executable instructions for executing the present application, and is controlled by the processor 302 to execute. The processor 302 is configured to execute the computer-executable instructions stored in the memory 301, so as to implement a system for improving image recognition accuracy based on intelligent AI provided by the present application.
Alternatively, the computer executable instructions may also be referred to as application code, and the application is not limited thereto.
The present application solves the technical problem that in the prior art the effect of image recognition is poor and the recognition accuracy is consequently low. It improves the effect and quality of image recognition and enhances its accuracy and precision; at the same time, it reduces the cost of image recognition and avoids wasting manpower, material and other resources, achieving the technical effect of laying a foundation for the further development of image recognition technology.
Those of ordinary skill in the art will understand that: the various numbers of the first, second, etc. mentioned in this application are for convenience of description and are not intended to limit the scope of this application nor to indicate the order of precedence. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one" means one or more. At least two means two or more. "at least one," "any," or similar expressions refer to any combination of these items, including any combination of singular or plural items. For example, at least one (one ) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in accordance with the present application are generated, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly. The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device including one or more available media integrated servers, data centers, and the like. The usable medium may be a magnetic medium, an optical medium, a semiconductor medium, or the like.
The various illustrative logical units and circuits described in this application may be implemented or operated through the design of a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in this application may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be disposed in a terminal; in the alternative, the processor and the storage medium may reside in different components within the terminal. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application.
Accordingly, the specification and figures are merely exemplary of the application and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the present application and its equivalent technology, it is intended that the present application include such modifications and variations.

Claims (10)

1. A system for improving image recognition accuracy based on intelligent AI, the system comprising:
a first execution unit, configured to acquire a first target area image set, wherein the target area is an area where a first target is located;
a first obtaining unit, configured to generate a plurality of second target area image sets from the first target area image set by using a generative adversarial network model;
a second execution unit, configured to pre-process images in the first target area image set and the plurality of second target area image sets to obtain a plurality of first image sets and a plurality of second image sets;
a second obtaining unit, configured to perform weight distribution according to the color level variation degree of the images in the plurality of first image sets, to obtain a first weight distribution result;
a third execution unit, configured to input the plurality of first image sets and the plurality of second image sets into an image recognition analysis model to obtain a plurality of first output results and a plurality of second output results;
and a fourth execution unit, configured to perform weighting adjustment on the plurality of first output results according to the first weight distribution result, and to obtain a first target identification result in combination with the plurality of second output results.
2. The system of claim 1, further comprising:
a third obtaining unit, configured to construct and train the generative adversarial network model;
a fourth obtaining unit, configured to divide the images in the first target area image set according to a time sequence to obtain a first image set;
a fifth obtaining unit, configured to input the images in the first image set into the generative adversarial network model, generate a plurality of images from each image, and obtain a plurality of second image sets;
a sixth obtaining unit, configured to sort, according to the time sequence, the images in the plurality of second image sets to obtain the plurality of second target area image sets.
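The data flow of claim 2 (time-ordered splitting, per-image generation, regrouping into time-ordered sets) can be sketched as follows. This is only an illustrative sketch: `generate_variants` is a hypothetical stand-in for the generative adversarial network model, and the noise level and all other parameters are assumptions, not the patent's implementation.

```python
import numpy as np

def generate_variants(image, k, rng):
    # Hypothetical stand-in for the generative adversarial network model:
    # produce k noisy variants of one image.
    return [image + rng.normal(0.0, 0.01, image.shape) for _ in range(k)]

def expand_image_set(frames, timestamps, k, seed=0):
    """Divide the first target area image set by time sequence, generate k
    images from each frame, and regroup them into k time-ordered second
    target area image sets."""
    rng = np.random.default_rng(seed)
    order = np.argsort(timestamps)            # sort frames by time sequence
    ordered = [frames[i] for i in order]
    second_sets = [[] for _ in range(k)]
    for frame in ordered:
        for j, variant in enumerate(generate_variants(frame, k, rng)):
            second_sets[j].append(variant)    # j-th variant joins the j-th set
    return second_sets                        # each set keeps the time order

frames = [np.full((2, 2), t) for t in (3.0, 1.0, 2.0)]
second_sets = expand_image_set(frames, timestamps=[3.0, 1.0, 2.0], k=4)
```

Each of the four resulting sets contains one variant of every original frame, ordered by timestamp, which is the regrouping that the sixth obtaining unit describes.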
3. The system of claim 2, further comprising:
a fifth execution unit, configured to acquire a third target area image set of the first target located in the target area;
a seventh obtaining unit, configured to obtain a training sample and a test sample according to the third target area image set;
a sixth execution unit, configured to construct a generation model and a discrimination model;
an eighth obtaining unit, configured to input the training sample into the generation model, superimpose noise, and generate an adversarial sample;
a seventh execution unit, configured to input the adversarial sample into the discrimination model to distinguish the adversarial sample from the training sample;
an eighth execution unit, configured to update parameters of the generation model and the discrimination model and to iterate;
a ninth execution unit, configured to obtain the generative adversarial network model when the discrimination model can no longer distinguish the generated adversarial samples from the training samples.
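The training cycle of claim 3 (generate with superimposed noise, discriminate, update both models, iterate) can be illustrated on one-dimensional toy data. Everything below — the linear generator, the logistic discriminator, the learning rate, and the fixed iteration count — is an illustrative assumption, not the patent's model.

```python
import numpy as np

# Toy 1-D adversarial training sketch: generator g(z) = a*z + b,
# discriminator d(x) = sigmoid(w*x + c), with hand-derived gradients.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
a, b = 1.0, 0.0            # generator parameters (start away from the data)
w, c = 0.1, 0.0            # discriminator parameters
lr, data_mean = 0.05, 4.0  # illustrative learning rate and real-data mean

for _ in range(2000):
    x = rng.normal(data_mean, 1.0)   # one real training sample
    z = rng.normal()                 # noise superimposed on the generator input
    g = a * z + b                    # generated (adversarial) sample
    dx, dg = sigmoid(w * x + c), sigmoid(w * g + c)
    # Discriminator ascent on log d(x) + log(1 - d(g))
    w += lr * ((1 - dx) * x - dg * g)
    c += lr * ((1 - dx) - dg)
    # Generator ascent on log d(g) (non-saturating loss)
    grad_g = (1 - dg) * w
    a += lr * grad_g * z
    b += lr * grad_g

samples = a * rng.normal(size=1000) + b   # draw from the trained generator
```

In the patent's formulation, iteration stops when the discrimination model can no longer tell the generated samples from the training samples; the fixed step count here merely stands in for that stopping criterion.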
4. The system of claim 1, further comprising:
a tenth execution unit, configured to extract images of a first preset frame number in the first target area image set and the plurality of second target area image sets respectively to obtain a first preprocessed image set;
an eleventh execution unit, configured to perform image quality enhancement processing on the first preprocessed image set to obtain the first image set;
a twelfth execution unit, configured to extract images with a second preset frame number in the first target area image set and the plurality of second target area image sets respectively to obtain a second preprocessed image set, where the second preset frame number is greater than the first preset frame number;
a thirteenth execution unit, configured to perform image quality reduction processing on the second preprocessed image set to obtain the second image set.
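A minimal sketch of claim 4's two preprocessing branches, assuming a linear contrast stretch for "image quality enhancement" and downsampling plus requantization for "image quality reduction" — the patent does not specify the concrete operations, so these are illustrative choices:

```python
import numpy as np

def extract_frames(video, n):
    """Take the first n frames of an image set (a preset frame number)."""
    return video[:n]

def enhance(images):
    """Quality-enhancement sketch: per-frame linear contrast stretch to [0, 255]."""
    out = []
    for img in images:
        lo, hi = img.min(), img.max()
        scale = 255.0 / (hi - lo) if hi > lo else 1.0
        out.append((img - lo) * scale)
    return np.stack(out)

def degrade(images):
    """Quality-reduction sketch: 2x spatial downsampling plus coarse requantization."""
    small = images[:, ::2, ::2]       # keep every other row and column
    return (small // 32) * 32         # drop low-order intensity detail

video = np.arange(8 * 4 * 4, dtype=float).reshape(8, 4, 4)
first_set = enhance(extract_frames(video, 2))    # first preset frame number
second_set = degrade(extract_frames(video, 5))   # larger second preset frame number
```

The asymmetry mirrors the claim: fewer frames at enhanced quality feed the weighted channel, while more frames at reduced quality feed the second channel.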
5. The system of claim 1, further comprising:
a fourteenth execution unit, configured to perform graying processing on the images in the plurality of first image sets to obtain a plurality of first grayscale image sets;
a fifteenth executing unit, configured to extract and obtain grayscale values in the multiple first grayscale image sets, so as to obtain multiple grayscale distribution sets;
a sixteenth execution unit, configured to obtain the difference between the maximum grayscale value and the minimum grayscale value in each of the multiple grayscale distribution sets, to obtain multiple grayscale differences;
a ninth obtaining unit, configured to perform gray scale value clustering according to the multiple gray scale distribution sets, respectively, to obtain multiple gray scale cluster sets;
a seventeenth executing unit, configured to perform weight assignment according to the multiple grayscale differences and the multiple grayscale clustering sets, to obtain the first weight assignment result.
6. The system of claim 5, further comprising:
a tenth obtaining unit, configured to perform weight distribution according to difference values of the multiple gray differences, respectively, to obtain a second weight distribution result;
an eighteenth execution unit, configured to perform weight distribution according to the number of clustering results in the multiple grayscale clustering sets and a grayscale difference between the clustering results, respectively, to obtain a third weight distribution result;
a nineteenth execution unit, configured to perform weighting processing on the second weight distribution result according to the third weight distribution result, to obtain the first weight distribution result.
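Claims 5 and 6 together can be sketched as follows. The bin-based stand-in for "clustering", the product used to weight the second distribution by the third, and the final normalization are illustrative choices, since the claims leave the exact formulas open:

```python
import numpy as np

def tone_weights(image_sets):
    """Derive a weight distribution from gray-level variation (claims 5-6 sketch):
    graying, max-min grayscale difference, crude gray-value clustering, then
    weighting the difference-based distribution by the cluster-based one."""
    diffs, cluster_counts = [], []
    for images in image_sets:
        gray = np.stack([img.mean(axis=-1) for img in images])  # graying
        levels = gray.ravel()
        diffs.append(levels.max() - levels.min())          # grayscale difference
        cluster_counts.append(len(np.unique(levels // 32)))  # 32-wide bins as "clusters"
    second = np.asarray(diffs, dtype=float)          # from the gray differences
    third = np.asarray(cluster_counts, dtype=float)  # from the clustering results
    first = second * third          # weight the second distribution by the third
    return first / first.sum()      # normalized first weight distribution

flat = [np.full((4, 4, 3), 100.0)]                            # low tonal variation
contrasty = [np.zeros((4, 4, 3)), np.full((4, 4, 3), 255.0)]  # high tonal variation
weights = tone_weights([flat, contrasty])
```

As intended by the claim, the image set with greater tonal variation and more gray-level clusters receives the larger weight.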
7. The system of claim 1, further comprising:
a twentieth execution unit, configured to construct the image recognition analysis model, where the image recognition analysis model includes a first recognition channel and a second recognition channel;
an eleventh obtaining unit, configured to input the plurality of first image sets into the first recognition channel to obtain the plurality of first output results;
a twelfth obtaining unit, configured to input the plurality of second image sets into the second recognition channel to obtain the plurality of second output results.
8. A method for improving image recognition accuracy based on intelligent AI, applied to the system for improving image recognition accuracy based on intelligent AI, the method comprising:
acquiring a first target area image set, wherein the target area is an area where a first target is located;
generating a plurality of second target area image sets from the first target area image set by using a generative adversarial network model;
preprocessing images in the first target area image set and the plurality of second target area image sets to obtain a plurality of first image sets and a plurality of second image sets;
performing weight distribution according to the color level variation degree of the images in the plurality of first image sets to obtain a first weight distribution result;
inputting the plurality of first image sets and the plurality of second image sets into an image recognition analysis model to obtain a plurality of first output results and a plurality of second output results;
and performing weighting adjustment on the plurality of first output results according to the first weight distribution result, and obtaining a first target identification result in combination with the plurality of second output results.
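The final fusion step can be sketched in plain Python. The additive combination of the weighted first-channel results with the mean of the second-channel results is an illustrative assumption; the claim does not fix the combination operator:

```python
def fuse(first_outputs, second_outputs, first_weights):
    """Weight the first-channel score vectors by the first weight distribution,
    combine with the second-channel scores, and return the winning class index."""
    n = len(first_outputs[0])
    weighted = [sum(w * out[i] for w, out in zip(first_weights, first_outputs))
                for i in range(n)]                     # weighting adjustment
    second_mean = [sum(out[i] for out in second_outputs) / len(second_outputs)
                   for i in range(n)]                  # combine second results
    combined = [a + b for a, b in zip(weighted, second_mean)]
    return combined.index(max(combined))               # recognized class index

# Two first-channel score vectors over three classes, weighted 0.7 / 0.3,
# combined with two second-channel score vectors.
label = fuse(
    first_outputs=[[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]],
    second_outputs=[[0.1, 0.8, 0.1], [0.2, 0.6, 0.2]],
    first_weights=[0.7, 0.3],
)
```

Here the weighted first-channel scores are [0.48, 0.42, 0.10] and the mean second-channel scores are [0.15, 0.70, 0.15], so class 1 wins the combined vote.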
9. A system for improving image recognition accuracy based on intelligent AI, comprising: a processor coupled with a memory, the memory being configured to store a program that, when executed by the processor, causes the system to perform the functions of the system of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the functions of the system of any one of claims 1 to 7.
CN202210536370.1A 2022-05-18 2022-05-18 System for improving image recognition accuracy rate based on intelligent AI Withdrawn CN114819099A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210536370.1A CN114819099A (en) 2022-05-18 2022-05-18 System for improving image recognition accuracy rate based on intelligent AI

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210536370.1A CN114819099A (en) 2022-05-18 2022-05-18 System for improving image recognition accuracy rate based on intelligent AI

Publications (1)

Publication Number Publication Date
CN114819099A true CN114819099A (en) 2022-07-29

Family

ID=82515156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210536370.1A Withdrawn CN114819099A (en) 2022-05-18 2022-05-18 System for improving image recognition accuracy rate based on intelligent AI

Country Status (1)

Country Link
CN (1) CN114819099A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116359683A (en) * 2023-02-28 2023-06-30 四川大唐国际甘孜水电开发有限公司 Partial discharge mode identification method and system based on information interaction
CN116359683B (en) * 2023-02-28 2023-12-26 四川大唐国际甘孜水电开发有限公司 Partial discharge mode identification method and system based on information interaction

Similar Documents

Publication Publication Date Title
CN108898175B (en) Computer-aided model construction method based on deep learning gastric cancer pathological section
CN107506761B (en) Brain image segmentation method and system based on significance learning convolutional neural network
CN106228185B (en) A kind of general image classifying and identifying system neural network based and method
CN111582225B (en) Remote sensing image scene classification method and device
CN105654066A (en) Vehicle identification method and device
CN111339935B (en) Optical remote sensing picture classification method based on interpretable CNN image classification model
CN112446891A (en) Medical image segmentation method based on U-Net network brain glioma
CN113761259A (en) Image processing method and device and computer equipment
CN114155397B (en) Small sample image classification method and system
CN111932529A (en) Image segmentation method, device and system
CN110930378A (en) Emphysema image processing method and system based on low data demand
CN111611851A (en) Model generation method, iris detection method and device
CN112883931A (en) Real-time true and false motion judgment method based on long and short term memory network
CN116958637A (en) Training method, device, equipment and storage medium of image detection model
WO2023108873A1 (en) Brain network and brain addiction connection calculation method and apparatus
CN114819099A (en) System for improving image recognition accuracy rate based on intelligent AI
Wang et al. Single trunk multi-scale network for micro-expression recognition
CN112836755B (en) Sample image generation method and system based on deep learning
CN114241240A (en) Method and device for classifying brain images, electronic equipment and storage medium
CN112580445B (en) Human body gait image visual angle conversion method based on generation of confrontation network
CN112818774A (en) Living body detection method and device
CN111860601A (en) Method and device for predicting large fungus species
CN114387524B (en) Image identification method and system for small sample learning based on multilevel second-order representation
CN117011566A (en) Target detection method, detection model training method, device and electronic equipment
CN115292538A (en) Map line element extraction method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220729