CN117152648B - Auxiliary teaching picture recognition device based on augmented reality - Google Patents
- Publication number
- CN117152648B CN117152648B CN202311418555.3A CN202311418555A CN117152648B CN 117152648 B CN117152648 B CN 117152648B CN 202311418555 A CN202311418555 A CN 202311418555A CN 117152648 B CN117152648 B CN 117152648B
- Authority
- CN
- China
- Prior art keywords
- image
- module
- information
- dimensional
- sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 230000003190 augmentative effect Effects 0.000 title claims abstract description 20
- 238000012545 processing Methods 0.000 claims abstract description 55
- 238000011156 evaluation Methods 0.000 claims abstract description 26
- 238000004458 analytical method Methods 0.000 claims abstract description 15
- 230000000007 visual effect Effects 0.000 claims description 29
- 238000004364 calculation method Methods 0.000 claims description 19
- 238000013441 quality evaluation Methods 0.000 claims description 17
- 238000012937 correction Methods 0.000 claims description 11
- 230000006870 function Effects 0.000 claims description 8
- 230000000694 effects Effects 0.000 claims description 5
- 238000001303 quality assessment method Methods 0.000 claims description 5
- 239000000284 extract Substances 0.000 claims description 4
- 238000001514 detection method Methods 0.000 claims description 3
- 238000005516 engineering process Methods 0.000 claims description 3
- 238000000034 method Methods 0.000 description 8
- 239000002131 composite material Substances 0.000 description 2
- 238000000605 extraction Methods 0.000 description 2
- 238000006467 substitution reaction Methods 0.000 description 2
- 230000007547 defect Effects 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/36—Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G06V20/647—Three-dimensional objects by matching two-dimensional images to three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
- G06V30/41—Analysis of document content
- G06V30/418—Document matching, e.g. of document images
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B19/00—Teaching not covered by other main groups of this subclass
- G09B19/0061—Geography
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Business, Economics & Management (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Tourism & Hospitality (AREA)
- Nonlinear Science (AREA)
- Economics (AREA)
- Human Resources & Organizations (AREA)
- Marketing (AREA)
- Primary Health Care (AREA)
- Strategic Management (AREA)
- General Business, Economics & Management (AREA)
- Entrepreneurship & Innovation (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the technical field of teaching auxiliary devices and discloses an augmented-reality-based auxiliary teaching image recognition device comprising an image acquisition module, an image processing and recognition module, an information parameter acquisition module, a sample recognition processing module, a visual contrast processing module, an information quality processing module, an image matching analysis module, and a comprehensive evaluation and feedback module.
Description
Technical Field
The invention relates to the technical field of computer vision, and in particular to an augmented-reality-based auxiliary teaching image recognition device.
Background
The most distinctive feature of geography is that geographic images serve as carriers that intuitively convey concentrated information, so image-based teaching is the most central task of a middle-school geography teacher. In middle-school geography teaching, geographic concepts are generally associated with geographic phenomena, geographic processes and geographic features; presenting them as images makes abstract geographic concepts concrete and visual, helping students better understand and memorize geographic knowledge. Image-based teaching uses real geographic photographs and satellite images to display real geographic environments and landscapes, so that by observing real images students can learn the landform, climate and vegetation characteristics of different regions, enhancing the reality and intuitiveness of geographic information.
However, much of the content is abstract, and conventional teaching methods alone cannot achieve satisfactory teaching effects. Using two-dimensional images in geography teaching has several limitations. Limited information: a two-dimensional image is constrained by size and scale, cannot simultaneously display the relationships among multiple geographic elements or multiple geographic processes, and sometimes cannot comprehensively express the complexity of a particular geographic phenomenon. Local focus: a two-dimensional image can only show the characteristics and phenomena of a specific area and cannot fully present the overall geographic environment or a complete view of the region, leaving students with an insufficient understanding of the geographic system and its overall relationships. High degree of abstraction: in order to simplify and beautify geographic phenomena, two-dimensional images often have to be symbolized and abstracted, causing part of the geographic information to be lost or distorted. Loss of stereoscopic impression: a two-dimensional image cannot present the three-dimensionality of geographic features, such as the height of mountains or the undulation of terrain. These limitations can lead to errors in image information and a lack of authenticity in image teaching.
Disclosure of Invention
In order to overcome the above defects in the prior art, the invention provides an augmented-reality-based auxiliary teaching image recognition device, which aims to solve the problems described in the background art.
The invention provides the following technical scheme: the auxiliary teaching image recognition device based on augmented reality comprises an image acquisition module, an image processing and recognition module, an information parameter acquisition module, a sample recognition processing module, a visual contrast processing module, an information quality processing module, an image matching analysis module and a comprehensive evaluation and feedback module;
the image acquisition module aims the auxiliary teaching image recognition device at the target object to be acquired, sets image acquisition parameters, acquires a two-dimensional image through image acquisition equipment, and transmits the acquired two-dimensional image to the image processing and recognition module;
the image processing and identifying module is used for extracting two-dimensional image information to generate a three-dimensional model based on the two-dimensional image acquired by the image acquisition module, processing and analyzing the two-dimensional image and the three-dimensional model by utilizing computer vision and image processing technology, and transmitting the processed and analyzed information to the information parameter acquisition module;
the information parameter acquisition module acquires, based on the information obtained from processing and analysis in the image processing and recognition module, the sample identification parameters, visual contrast parameters and information quality parameters obtained by comparing the two-dimensional image with the three-dimensional model;
the sample identification processing module transmits the sample identification parameters in the information parameter acquisition module to an image sample identification function model, and calculates sample accuracy values of the two-dimensional image and the three-dimensional model;
the visual contrast processing module transmits the visual contrast parameters in the information parameter acquisition module to a visual contrast function model, and calculates out the visual consistency coefficients of the two-dimensional image and the three-dimensional model;
the information quality processing module transmits the information quality parameters in the information parameter acquisition module to an information quality evaluation model, and calculates information quality evaluation coefficients of a two-dimensional image and a three-dimensional model;
the image matching analysis module calculates the comprehensive image matching degree based on the sample accuracy, the vision consistency coefficient and the information quality evaluation coefficient, and transmits the comprehensive image matching degree to the comprehensive evaluation and feedback module;
the comprehensive evaluation and feedback module judges whether the two-dimensional image is accurate or not according to the comprehensive image matching degree calculated by the image matching analysis module, corrects the two-dimensional image according to the result and finally feeds the result back to the client.
Preferably, the target object in the image acquisition module refers to a two-dimensional figure in a geography textbook, the image acquisition equipment comprises a sensor and a camera, the set image acquisition parameters comprise resolution, frame rate and exposure time, and during image acquisition the environment must be ensured to meet the requirements of sufficient light and no interference.
Preferably, the image processing and recognition module extracts two-dimensional image information to generate a three-dimensional model by dividing the image, according to its gray levels, into two or more gray intervals of equal or unequal spacing. Mainly using the difference in gray level between the detection target and the background, one or more gray thresholds are selected, pixels are classified according to the comparison of their gray values with the thresholds, and pixels of different categories are marked with different values, thereby extracting the information of the two-dimensional image and generating the three-dimensional model.
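The gray-threshold classification described above can be sketched as follows; the image values and the single threshold of 128 are hypothetical, and a real device would derive its thresholds from the textbook image itself.

```python
import numpy as np

def threshold_segment(gray: np.ndarray, thresholds: list[int]) -> np.ndarray:
    """Label each pixel with the index of the gray interval it falls into.

    gray: 2-D array of gray values; thresholds: ascending gray thresholds
    splitting the range into len(thresholds) + 1 intervals.
    """
    # np.digitize returns, for each pixel, the number of thresholds it exceeds,
    # i.e. the index of its gray interval
    return np.digitize(gray, sorted(thresholds))

# Hypothetical values: dark target pixels vs bright background, one threshold
img = np.array([[10, 200], [30, 220]])
labels = threshold_segment(img, [128])  # 0 = target interval, 1 = background
```

With more thresholds the same call separates the image into finer gray intervals, matching the "two or more gray intervals" wording above.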
Preferably, the sample identification parameters in the information parameter acquisition module include the number of positive samples correctly identified as positive, the number of negative samples correctly identified as negative, the number of negative samples incorrectly identified as positive, and the number of positive samples incorrectly identified as negative; the visual contrast parameters include the color mean difference, the average luminance difference and the contrast similarity; and the information quality parameters include the luminance means of the original image and the evaluated image, the luminance variances of the original image and the evaluated image, and the luminance covariance of the original image and the evaluated image.
Preferably, the sample accuracy in the sample identification processing module is calculated as follows:
step S01: positive samples are correctly identified as positive sample numbers TP, negative samples are correctly identified as negative sample numbers TN, negative samples are incorrectly identified as positive sample numbers FP, and positive samples are incorrectly identified as negative sample numbers FN;
step S02: the precision is calculated as P = TP / (TP + FP), where P denotes the precision, TP denotes the number of positive samples correctly identified as positive, and FP denotes the number of negative samples incorrectly identified as positive;
step S03: the recall is calculated as R = TP / (TP + FN), where R denotes the recall, TP denotes the number of positive samples correctly identified as positive, and FN denotes the number of positive samples incorrectly identified as negative;
step S04: the sample accuracy is the harmonic mean of the precision and the recall, calculated as F1 = 2 × P × R / (P + R), where F1 denotes the sample accuracy.
Preferably, the calculation formula of the information quality evaluation coefficient in the information quality processing module is as follows: Q = ((2 × μx × μy + C1) × (2 × σxy + C2)) / ((μx² + μy² + C1) × (σx² + σy² + C2)), where Q denotes the information quality evaluation coefficient, μx and μy denote the luminance means of the original image and the evaluated image respectively, σx² and σy² denote the luminance variances of the original image and the evaluated image respectively, σxy denotes the luminance covariance of the original image and the evaluated image, and C1 and C2 are constants.
Preferably, the comprehensive image matching degree in the image matching analysis module is calculated based on the sample accuracy, the visual consistency coefficient and the information quality evaluation coefficient as Z = λ × (a × F1 + b × V + c × Q), where Z denotes the comprehensive image matching degree, λ denotes the influence factor of the comprehensive image matching degree, and a, b and c are constants weighting the sample accuracy F1, the visual consistency coefficient V and the information quality evaluation coefficient Q.
Preferably, the comprehensive evaluation and feedback module compares the comprehensive image matching degree calculated based on the sample accuracy, the vision consistency coefficient and the information quality evaluation coefficient with a preset standard threshold, if the matching degree exceeds the set standard threshold, the image is considered to be sufficiently matched without correction, if the matching degree is lower than the set standard threshold, correction is required according to the result, the result is finally fed back to the client, the corrected image is subjected to re-matching degree evaluation, and the correction effect is judged.
The invention has the technical effects and advantages that:
the invention collects two-dimensional images through the image collecting module, the image processing and identifying module receives two-dimensional image extraction information to generate a three-dimensional model, the information parameter obtaining module obtains sample identification parameters obtained by comparing the two-dimensional images with the three-dimensional model, visual contrast parameters and information quality parameters, the parameters obtained by the information obtaining module are substituted into corresponding function models, sample accuracy, visual consistency coefficients and information quality assessment coefficients are calculated, the image matching analysis module obtains comprehensive image matching degree, the comprehensive assessment and feedback module judges whether the two-dimensional images are accurate according to the comprehensive image matching degree and corrects according to the result, in a word, an auxiliary teaching image identifying device based on augmented reality can accurately identify and process the images, improve the accuracy of the images in geographic textbooks, correct the geographic textbooks to more accurately present geographic features and geographic phenomena, and through updating or redesigning the images, the images can be ensured to be consistent with actual conditions, the accurate understanding of geographic information by students can be enhanced, the elements and features of the geographic phenomena can be more completely presented, more comprehensive geographic knowledge can be provided, and more abundant auxiliary tools can be provided for teaching.
Drawings
Fig. 1 is a flow chart of an augmented reality-based auxiliary teaching and image recognition device.
Detailed Description
The embodiments of the present invention will be described clearly and completely below with reference to the drawings. The configurations described in the following embodiments are merely examples; the augmented-reality-based auxiliary teaching image recognition device of the present invention is not limited to the structures described below, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of the present invention.
Example 1: referring to fig. 1, the invention provides an augmented reality-based auxiliary teaching image recognition device, which comprises an image acquisition module, an image processing and recognition module, an information parameter acquisition module, a sample recognition processing module, a visual contrast processing module, an information quality processing module, an image matching analysis module and a comprehensive evaluation and feedback module.
In this embodiment, it needs to be specifically described that the image acquisition module aims the auxiliary teaching image recognition device at the target object to be acquired, sets image acquisition parameters, acquires a two-dimensional image through the image acquisition equipment, and transmits the acquired two-dimensional image to the image processing and recognition module;
the target object in the image acquisition module refers to a two-dimensional figure in a geography textbook, the image acquisition equipment comprises a sensor and a camera, the set image acquisition parameters comprise resolution, frame rate and exposure time, and during image acquisition the environment must be ensured to meet the requirements of sufficient light and no interference.
In this embodiment, it should be specifically described that, based on the two-dimensional image acquired by the image acquisition module, the image processing and recognition module extracts two-dimensional image information to generate a three-dimensional model, processes and analyzes the two-dimensional image and the three-dimensional model by using computer vision and image processing technology, and transmits the processed and analyzed information to the information parameter acquisition module;
the image processing and recognition module extracts two-dimensional image information to generate a three-dimensional model by dividing the image, according to its gray levels, into two or more gray intervals of equal or unequal spacing. Mainly using the difference in gray level between the detection target and the background, one or more gray thresholds are selected, pixels are classified according to the comparison of their gray values with the thresholds, and pixels of different categories are marked with different values, thereby extracting the information of the two-dimensional image and generating the three-dimensional model.
In this embodiment, it needs to be specifically described that, based on the information obtained by processing and analyzing in the image processing and identifying module, the information parameter obtaining module obtains a sample identifying parameter, a visual contrast parameter and an information quality parameter, which are obtained by comparing a two-dimensional image with a three-dimensional model;
in the information parameter acquisition module, the sample identification parameters include the number of positive samples correctly identified as positive, the number of negative samples correctly identified as negative, the number of negative samples incorrectly identified as positive, and the number of positive samples incorrectly identified as negative; the visual contrast parameters include the color mean difference, the average luminance difference and the contrast similarity; and the information quality parameters include the luminance means of the original image and the evaluated image, the luminance variances of the original image and the evaluated image, and the luminance covariance of the original image and the evaluated image.
In this embodiment, it needs to be specifically described that the sample recognition processing module transmits the sample recognition parameters in the information parameter acquisition module to the image sample recognition function model, and calculates the sample accuracy values of the two-dimensional image and the three-dimensional model;
the sample accuracy in the sample identification processing module is calculated as follows:
step S01: positive samples are correctly identified as positive sample numbers TP, negative samples are correctly identified as negative sample numbers TN, negative samples are incorrectly identified as positive sample numbers FP, and positive samples are incorrectly identified as negative sample numbers FN;
step S02: the precision is calculated as P = TP / (TP + FP), where P denotes the precision, TP denotes the number of positive samples correctly identified as positive, and FP denotes the number of negative samples incorrectly identified as positive;
step S03: the recall is calculated as R = TP / (TP + FN), where R denotes the recall, TP denotes the number of positive samples correctly identified as positive, and FN denotes the number of positive samples incorrectly identified as negative;
step S04: the sample accuracy is the harmonic mean of the precision and the recall, calculated as F1 = 2 × P × R / (P + R), where F1 denotes the sample accuracy.
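Steps S01 to S04 amount to the standard precision/recall/F1 computation; a minimal sketch, with hypothetical confusion counts:

```python
def sample_accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """F1 score: harmonic mean of precision and recall (steps S01-S04).

    tn is listed in step S01 but does not enter the F1 formula; it is kept
    for completeness of the confusion counts.
    """
    precision = tp / (tp + fp)  # step S02: correct positives / predicted positives
    recall = tp / (tp + fn)     # step S03: correct positives / actual positives
    return 2 * precision * recall / (precision + recall)  # step S04

# Hypothetical counts: precision = recall = 0.8, so F1 = 0.8
f1 = sample_accuracy(tp=80, tn=90, fp=20, fn=20)
```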
In this embodiment, it needs to be specifically described that the visual contrast processing module transmits the visual contrast parameter in the information parameter acquisition module to the visual contrast function model, and calculates a visual consistency coefficient of the two-dimensional image and the three-dimensional stereoscopic model;
the calculation formula of the visual consistency coefficient in the visual contrast processing module is as follows: V = S / (1 + k1 × ΔC + k2 × ΔL), where V denotes the visual consistency coefficient, k1 and k2 denote influence factors, S denotes the contrast similarity, ΔC denotes the color mean difference, and ΔL denotes the average luminance difference.
In this embodiment, it needs to be specifically described that the information quality processing module transmits the information quality parameters in the information parameter acquisition module to the information quality evaluation model, and calculates to obtain the information quality evaluation coefficients of the two-dimensional image and the three-dimensional stereoscopic model;
the calculation formula of the information quality evaluation coefficient in the information quality processing module is as follows: Q = ((2 × μx × μy + C1) × (2 × σxy + C2)) / ((μx² + μy² + C1) × (σx² + σy² + C2)), where Q denotes the information quality evaluation coefficient, μx and μy denote the luminance means of the original image and the evaluated image respectively, σx² and σy² denote the luminance variances of the original image and the evaluated image respectively, σxy denotes the luminance covariance of the original image and the evaluated image, and C1 and C2 are constants.
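Assuming the coefficient follows the standard SSIM-style combination of the listed luminance statistics, a minimal sketch (the constant values c1 and c2 are arbitrary assumptions, not specified by the source):

```python
import numpy as np

def quality_coefficient(x: np.ndarray, y: np.ndarray,
                        c1: float = 1e-4, c2: float = 9e-4) -> float:
    """SSIM-style information quality coefficient from luminance statistics.

    x, y: original image and evaluated image as float arrays in [0, 1].
    c1, c2: small stabilizing constants (assumed values).
    """
    mu_x, mu_y = x.mean(), y.mean()          # luminance means
    var_x, var_y = x.var(), y.var()          # luminance variances
    cov = ((x - mu_x) * (y - mu_y)).mean()   # luminance covariance
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)
            / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

# Identical images evaluate to 1.0; dissimilar images score lower
img = np.array([[0.2, 0.5], [0.7, 0.9]])
score = quality_coefficient(img, img)
```

In this form the coefficient is bounded above by 1, reached when the two images have identical luminance statistics.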
In this embodiment, it needs to be specifically described that the image matching analysis module calculates the comprehensive image matching degree based on the sample accuracy, the vision consistency coefficient and the information quality evaluation coefficient, and transmits the comprehensive image matching degree to the comprehensive evaluation and feedback module;
the comprehensive image matching degree in the image matching analysis module is calculated based on the sample accuracy, the visual consistency coefficient and the information quality evaluation coefficient as Z = λ × (a × F1 + b × V + c × Q), where Z denotes the comprehensive image matching degree, λ denotes the influence factor of the comprehensive image matching degree, and a, b and c are constants weighting the sample accuracy F1, the visual consistency coefficient V and the information quality evaluation coefficient Q.
In this embodiment, it needs to be specifically described that the comprehensive evaluation and feedback module determines whether the two-dimensional image is accurate according to the comprehensive image matching degree calculated by the image matching analysis module, corrects the two-dimensional image according to the result, and finally feeds back the result to the client;
the comprehensive evaluation and feedback module compares the comprehensive image matching degree, calculated from the sample accuracy, the visual consistency coefficient and the information quality evaluation coefficient, with a preset standard threshold. If the matching degree exceeds the threshold, the image is considered sufficiently matched and no correction is needed; if it is below the threshold, correction is required according to the result, and the correction methods may include image alignment, color correction, brightness adjustment and filtering. The result is finally fed back to the client, and the corrected image is re-evaluated for matching degree, using the same comprehensive image matching degree formula, to judge the correction effect.
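The threshold comparison and correction decision can be sketched as follows; the threshold value 0.85 is a hypothetical choice, as the source does not specify the preset standard threshold.

```python
# Correction methods named in the description above
CORRECTION_STEPS = ["image alignment", "color correction",
                    "brightness adjustment", "filtering"]

def evaluate_match(match_degree: float, standard_threshold: float = 0.85) -> dict:
    """Compare the composite matching degree with the preset standard threshold.

    Returns whether the image is sufficiently matched and, if not, the
    correction steps to apply before re-evaluating with the same formula.
    """
    matched = match_degree >= standard_threshold
    return {"matched": matched,
            "corrections": [] if matched else list(CORRECTION_STEPS)}

report = evaluate_match(0.72)  # below threshold: corrections required
```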
Example 2: this embodiment differs from embodiment 1 in that a registration metric index of the feature points is additionally calculated; the specific calculation process is as follows:
step S01: the characteristic points after the two-dimensional image and the three-dimensional model are processed and analyzed are marked one by one and are recorded as 1, 2.
Step S02: the number of feature point pairs is recorded as N, and d(i) denotes the matching error corresponding to the i-th feature point;
step S03: the registration metric index of the feature points is calculated as followsWherein->Registration metric index representing feature points, +.>Representing the matching error corresponding to the ith feature point,/->Representing the number of feature point pairs.
Finally: the foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and principles of the invention are intended to be included within the scope of the invention.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (5)
1. The auxiliary teaching image recognition device based on augmented reality is characterized by comprising an image acquisition module, an image processing and recognition module, an information parameter acquisition module, a sample recognition processing module, a visual contrast processing module, an information quality processing module, an image matching analysis module and a comprehensive evaluation and feedback module;
the image acquisition module aims the image acquisition equipment at the target object, sets the image acquisition parameters, acquires a two-dimensional image, and transmits the acquired two-dimensional image to the image processing and recognition module, wherein the set image acquisition parameters comprise resolution, frame rate, and exposure time, and during acquisition the environment must meet the requirements of sufficient light and freedom from interference;
the image processing and identifying module is used for extracting two-dimensional image information to generate a three-dimensional model based on the two-dimensional image acquired by the image acquisition module, processing and analyzing the two-dimensional image and the three-dimensional model by utilizing computer vision and image processing technology, and transmitting the processed and analyzed information to the information parameter acquisition module;
the information parameter acquisition module acquires sample identification parameters, visual contrast parameters and information quality parameters obtained by comparing the two-dimensional image with the three-dimensional model based on the information obtained by processing and analyzing in the image processing and identification module;
the sample identification processing module transmits the sample identification parameters in the information parameter acquisition module to an image sample identification function model, and calculates sample accuracy values of the two-dimensional image and the three-dimensional model;
the visual contrast processing module transmits the visual contrast parameters in the information parameter acquisition module to a visual contrast function model, and calculates the visual consistency coefficients of the two-dimensional image and the three-dimensional model;
the information quality processing module transmits the information quality parameters in the information parameter acquisition module to an information quality evaluation model, and calculates information quality evaluation coefficients of a two-dimensional image and a three-dimensional model;
the image matching analysis module calculates the comprehensive image matching degree based on the sample accuracy, the vision consistency coefficient and the information quality evaluation coefficient, and transmits the comprehensive image matching degree to the comprehensive evaluation and feedback module;
the comprehensive evaluation and feedback module judges whether the two-dimensional image is accurate or not according to the comprehensive image matching degree calculated by the image matching analysis module, corrects the two-dimensional image according to the result and finally feeds the result back to the client;
the auxiliary teaching graph recognition device based on augmented reality is characterized in that: the sample accuracy in the sample identification processing module is calculated as follows:
step S01: positive samples are correctly identified as positive sample numbers TP, negative samples are correctly identified as negative sample numbers TN, negative samples are incorrectly identified as positive sample numbers FP, and positive samples are incorrectly identified as negative sample numbers FN;
step S02: the calculation formula of the accuracy rate (precision) is expressed as $P = \frac{TP}{TP + FP}$, wherein $P$ represents the accuracy rate, $TP$ represents the number of positive samples correctly identified as positive, and $FP$ represents the number of negative samples incorrectly identified as positive;
step S03: the calculation formula of the recall rate is expressed as $R = \frac{TP}{TP + FN}$, wherein $R$ represents the recall rate, $TP$ represents the number of positive samples correctly identified as positive, and $FN$ represents the number of positive samples incorrectly identified as negative;
step S04: the sample accuracy is the harmonic mean of the accuracy rate and the recall rate, and its calculation formula is $F = \frac{2PR}{P + R}$, wherein $F$ represents the sample accuracy;
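Steps S01 to S04 amount to the standard precision/recall/F1 computation over the four sample counts; a minimal sketch (function and variable names are illustrative):

```python
def sample_accuracy(tp, tn, fp, fn):
    """Steps S01-S04: precision P = TP/(TP+FP), recall R = TP/(TP+FN),
    and their harmonic mean F (the sample accuracy). tn is collected in
    step S01 but does not enter the final formula."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

For example, with TP=8, TN=5, FP=2, FN=2 the precision and recall are both 0.8, and so is their harmonic mean.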
the auxiliary teaching graph recognition device based on augmented reality is characterized in that: the calculation formula of the visual consistency coefficient in the visual contrast processing module is as follows: $S = \alpha_1 y - \alpha_2 (c + h)$, wherein $S$ represents the visual consistency coefficient, $\alpha_1$ and $\alpha_2$ represent influence factors, $y$ represents the contrast similarity, $c$ represents the color mean difference, and $h$ represents the average brightness difference;
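The exact functional form is not recoverable from this text; a sketch under the assumption that the coefficient rewards contrast similarity and penalizes the color and brightness differences through two influence factors (names and default weights are illustrative):

```python
def visual_consistency(y, c, h, alpha1=1.0, alpha2=0.5):
    """Hypothetical visual consistency coefficient: rewards contrast
    similarity y, penalizes color mean difference c and average
    brightness difference h (assumed linear form)."""
    return alpha1 * y - alpha2 * (c + h)
```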
the auxiliary teaching graph recognition device based on augmented reality is characterized in that: the calculation formula of the information quality evaluation coefficient in the information quality processing module is as follows: $Q = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$, wherein $Q$ represents the information quality assessment coefficient, $\mu_x$ and $\mu_y$ represent the luminance means of the original image and the evaluation image respectively, $\sigma_x^2$ and $\sigma_y^2$ represent the luminance variances of the original image and the evaluation image respectively, $\sigma_{xy}$ represents the luminance covariance of the original image and the evaluation image, and $C_1$ and $C_2$ are constants;
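The named quantities (luminance means, variances, covariance, and constants C1 and C2) match the structure of the SSIM index; a global (single-window) sketch on that assumption, with the usual stabilizing constants for 8-bit images:

```python
import numpy as np

def quality_coefficient(x, y, c1=6.5025, c2=58.5225):
    """SSIM-style quality index between original image x and evaluation
    image y (same-shape float arrays), computed globally rather than
    over local windows."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den
```

Identical images yield a coefficient of 1; the value falls toward 0 as luminance statistics diverge.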
the auxiliary teaching graph recognition device based on augmented reality is characterized in that: the comprehensive image matching degree in the image matching analysis module is calculated based on the sample accuracy, the visual consistency coefficient, and the information quality evaluation coefficient, and the calculation formula of the comprehensive image matching degree is $Z = \eta (aF + bS + cQ)$, wherein $Z$ represents the comprehensive image matching degree, $\eta$ represents the influence factor of the comprehensive image matching degree, and $a$, $b$, and $c$ are constants.
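With one influence factor and three constants weighting three inputs, the most natural reading is a weighted linear combination; a sketch under that assumption (the weights shown are illustrative, not from the patent):

```python
def matching_degree(f, s, q, eta=1.0, a=0.4, b=0.3, c=0.3):
    """Hypothetical comprehensive image matching degree: weighted
    combination of sample accuracy f, visual consistency coefficient s,
    and information quality coefficient q, scaled by influence factor eta."""
    return eta * (a * f + b * s + c * q)
```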
2. The augmented reality-based auxiliary teaching and graphics recognition device according to claim 1, wherein: the target object in the image acquisition module refers to a two-dimensional graph on a geographical textbook, and the image acquisition equipment comprises a sensor and a camera.
3. The augmented reality-based auxiliary teaching and graphics recognition device according to claim 1, wherein: the image processing and identifying module extracts two-dimensional image information to generate a three-dimensional model by dividing the image, according to gray level, into two or more gray intervals of equal or unequal width: exploiting the gray-level difference between the detection target and the background, one or more gray thresholds are selected, pixels are classified by comparing their gray value with the thresholds, and the different pixel classes are marked with different values, thereby extracting the information of the two-dimensional image and generating the three-dimensional model.
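The multi-threshold gray-level segmentation described in claim 3 can be sketched as follows (the interval boundaries are illustrative; the claim leaves the thresholds to be chosen per image):

```python
import numpy as np

def threshold_segment(gray, thresholds):
    """Classify each pixel by the gray interval it falls in: sorted
    thresholds split the gray range into len(thresholds)+1 classes,
    and each class is marked with a distinct label value."""
    labels = np.zeros(gray.shape, dtype=np.uint8)
    for t in sorted(thresholds):
        labels[gray > t] += 1
    return labels
```

For a two-threshold split at 50 and 150, pixels land in class 0, 1, or 2 depending on which interval their gray value occupies.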
4. The augmented reality-based auxiliary teaching and graphics recognition device according to claim 1, wherein: the information parameter obtaining module is characterized in that the sample identification parameters comprise positive samples which are correctly identified as positive sample numbers, negative samples which are correctly identified as negative sample numbers, negative samples which are incorrectly identified as positive sample numbers and positive samples which are incorrectly identified as negative sample numbers, the visual contrast parameters comprise color mean differences, average brightness differences and contrast similarities, and the information quality parameters comprise brightness mean values of an original image and an estimated image, brightness variances of the original image and the estimated image and brightness covariance of the original image and the estimated image.
5. The augmented reality-based auxiliary teaching and graphics recognition device according to claim 1, wherein: the comprehensive evaluation and feedback module compares the comprehensive image matching degree, calculated from the sample accuracy, the visual consistency coefficient, and the information quality evaluation coefficient, with a preset standard threshold; if the matching degree exceeds the threshold, the images are considered sufficiently matched and no correction is needed; if it is below the threshold, correction is performed according to the result and the result is fed back to the client, after which the corrected image undergoes a new matching-degree evaluation to judge the correction effect.
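Claim 5's compare-correct-re-evaluate cycle can be sketched as below; the threshold value, the round limit, and the correction callback are all illustrative assumptions, since the patent leaves them unspecified:

```python
def evaluate_with_feedback(degree, correct, threshold=0.85, max_rounds=3):
    """Compare the matching degree with a preset standard threshold; while
    it falls below, apply the correction step and re-evaluate, up to
    max_rounds times. Returns (final degree, matched?, rounds used)."""
    rounds = 0
    while degree < threshold and rounds < max_rounds:
        degree = correct(degree)
        rounds += 1
    return degree, degree >= threshold, rounds
```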
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311418555.3A CN117152648B (en) | 2023-10-30 | 2023-10-30 | Auxiliary teaching picture recognition device based on augmented reality |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117152648A CN117152648A (en) | 2023-12-01 |
CN117152648B true CN117152648B (en) | 2023-12-26 |
Family
ID=88910475
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311418555.3A Active CN117152648B (en) | 2023-10-30 | 2023-10-30 | Auxiliary teaching picture recognition device based on augmented reality |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117152648B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117455903B (en) * | 2023-12-18 | 2024-08-20 | 深圳市焕想科技有限公司 | Sports apparatus state evaluation method based on image processing technology |
CN117437235B (en) * | 2023-12-21 | 2024-03-12 | 四川新康意众申新材料有限公司 | Plastic film quality detection method based on image processing |
CN117788461B (en) * | 2024-02-23 | 2024-05-07 | 华中科技大学同济医学院附属同济医院 | Magnetic resonance image quality evaluation system based on image analysis |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101271469A (en) * | 2008-05-10 | 2008-09-24 | 深圳先进技术研究院 | Two-dimension image recognition based on three-dimensional model warehouse and object reconstruction method |
WO2011041925A1 (en) * | 2009-10-09 | 2011-04-14 | 江苏大学 | Intelligent evaluation method for famous and high-quality tea evaluation apparatus based on multi-sensor information fusion |
WO2017084186A1 (en) * | 2015-11-18 | 2017-05-26 | 华南理工大学 | System and method for automatic monitoring and intelligent analysis of flexible circuit board manufacturing process |
CN109034841A (en) * | 2018-07-11 | 2018-12-18 | 宁波艾腾湃智能科技有限公司 | Art work identification, displaying and the transaction platform compared based on digitized image/model |
KR20220111634A (en) * | 2021-02-02 | 2022-08-09 | 화웨이 그룹(광둥)컴퍼니 리미티드 | Online offline combined multidimensional education AI school system |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |