CN113160423A - Detection guiding method, system, device and storage medium based on augmented reality - Google Patents


Info

Publication number
CN113160423A
Authority
CN
China
Prior art keywords
detected
item
identification information
augmented reality
measured
Prior art date
Legal status
Pending
Application number
CN202110440566.6A
Other languages
Chinese (zh)
Inventor
Zheng Zhenyu (郑振宇)
Current Assignee
Hangzhou Companion Technology Co ltd
Original Assignee
Hangzhou Companion Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Companion Technology Co ltd filed Critical Hangzhou Companion Technology Co ltd
Priority to CN202110440566.6A priority Critical patent/CN113160423A/en
Publication of CN113160423A publication Critical patent/CN113160423A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06395Quality analysis or management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The application relates to an augmented reality-based detection guidance method, system, apparatus, and storage medium. The method identifies an object to be detected to obtain its identification information; acquires, from a pre-stored database according to that identification information, a virtual model of the object, the items to be measured, and the position information of those items; displays the virtual model overlaid in the spatial area within the user's field of view; and marks the items to be measured on the overlaid virtual model according to their position information. This solves the problem of low detection efficiency in the related art and improves detection efficiency.

Description

Detection guiding method, system, device and storage medium based on augmented reality
Technical Field
The present application relates to the field of industrial detection technologies, and in particular, to a detection guidance method, system, apparatus, and storage medium based on augmented reality.
Background
On a factory assembly line, parts usually undergo quality inspection before leaving the factory to obtain data such as dimensions or mass. At present, quality inspection is generally performed manually or semi-automatically. The related art provides a quality-inspection procedure as follows:
step 1, entering a product identification code (a digit string) on a PAD (tablet computer); after the product model is recognized, the product's specific parameters are displayed;
step 2, measuring the product's dimensions, such as its overall or local size, with a vernier caliper;
step 3, measuring the product's mass with a balance;
and step 4, after all parameters to be measured have been obtained, a worker judges whether the product passes inspection and fills the pass/fail result and the measured raw data into the PAD for subsequent review.
The current quality-inspection approach depends on worker experience; in particular, when a product requires measurements at many positions, erroneous and missed detections occur easily. Moreover, after measuring the parameters, the worker must fill the pass/fail result and the measured raw data into the PAD and then upload them for review, making the operation cumbersome.
No effective solution has yet been proposed for the problem of low detection efficiency in the related art.
Disclosure of Invention
This embodiment provides an augmented reality-based detection guidance method, system, apparatus, and storage medium to solve the problem of low detection efficiency in the related art.
In a first aspect, in this embodiment, an augmented reality-based detection guidance method is provided, including:
identifying an object to be detected to obtain identification information of the object to be detected;
acquiring a virtual model of the object to be detected, a project to be detected and position information of the project to be detected from a pre-stored database according to the identification information;
and displaying the virtual model overlaid in the spatial area within the user's field of view, and marking the item to be measured on the overlaid virtual model according to the item's position information.
In some embodiments, before obtaining the virtual model of the object to be detected, the item to be detected, and the location information of the item to be detected from a pre-stored database according to the identification information, the method further includes:
acquiring a detection task, wherein the detection task carries information for measuring a preset item and a preset position of an object to be detected corresponding to the identification information;
and configuring a virtual model of the object to be detected, the item to be detected and the position information of the item to be detected corresponding to the identification information in a pre-stored database according to the detection task.
In some embodiments, identifying the object to be detected to obtain the identification information of the object to be detected includes:
acquiring a first measured parameter obtained after a first to-be-measured item of the to-be-detected object is measured;
and matching the first measured parameter with a first standard parameter stored in advance, and determining that the identification information corresponding to the first standard parameter matched with the first measured parameter is the identification information of the object to be detected.
In some embodiments, the identification information includes a model number. When identifying the object to be detected yields a plurality of pieces of identification information, the method further includes:
obtaining, for each model, a confidence value that it is the true model of the object to be detected; arranging the models in descending order of confidence value; and setting the N models with the highest confidence values as candidate models for the user to select, where N is a positive integer.
In some embodiments, the object to be detected carries identity identification information. After obtaining the confidence values, arranging the models in descending order, and setting the top-N models as candidate models for the user to select, the method further includes:
acquiring the candidate model selected by the user;
and reading the identity identification information of the object to be detected, matching the candidate model selected by the user against that identity identification information, and determining from the matching result whether the selected candidate model is the true model of the object to be detected.
In some embodiments, after the virtual model is displayed overlaid in the spatial area within the user's field of view and the item to be measured is marked on the overlaid virtual model according to its position information, the method further includes:
acquiring a second actual measurement parameter obtained after a second item to be detected of the object to be detected is measured;
acquiring a second standard parameter corresponding to the identification information and the second item to be tested from a pre-stored database;
and comparing the second measured parameter with a second standard parameter stored in advance, and determining whether the object to be detected is qualified according to the obtained comparison result.
In some embodiments, comparing the second measured parameter with a pre-stored second standard parameter and determining whether the object to be detected is qualified according to the comparison result includes:
acquiring the second measured parameter obtained from the first measurement of the object to be detected;
judging whether the second measured parameter falls within the range of the second standard parameter;
if so, determining that the object to be detected is qualified;
if not, executing the following steps in a loop until a third measured parameter has been compared with the second standard parameter a preset number of times:
step A, generating prompt information instructing the user to measure the object to be detected again;
step B, acquiring the third measured parameter obtained from the new measurement;
and step C, judging whether the third measured parameter falls within the range of the second standard parameter; if so, determining that the object to be detected is qualified and exiting the loop; if not, returning to step A.
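The re-measurement loop of steps A-C can be sketched as follows. This is an illustrative Python sketch, not part of the patent; `measure_again` is a hypothetical stand-in for prompting the user (step A) and reading the measuring device (step B).

```python
# Hypothetical sketch of the re-measurement loop from steps A-C.
# `measure_again` stands in for prompting the user and reading the tool.

def check_with_retries(first_value, standard_range, measure_again, max_retries=3):
    """Return True if a measured value falls within the standard range,
    allowing up to `max_retries` re-measurements (steps A-C)."""
    low, high = standard_range
    if low <= first_value <= high:
        return True                      # qualified on the first measurement
    for _ in range(max_retries):         # step A: prompt; step B: re-measure
        value = measure_again()
        if low <= value <= high:         # step C: compare against the range
            return True                  # qualified; exit the loop
    return False                         # preset number of comparisons exhausted
```

A caller would pass a callback that blocks on the AR device's prompt and returns the next reading from the measuring device.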
In some embodiments, after comparing the second measured parameter with a second standard parameter stored in advance, and determining whether the object to be detected is qualified according to an obtained comparison result, the method further includes:
and storing the second measured parameter and information whether the object to be detected is qualified or not according to the identification information of the object to be detected.
In some embodiments, after the virtual model is displayed overlaid in the spatial area within the user's field of view and the item to be measured is marked on the overlaid virtual model according to its position information, the method further includes:
and shooting the measurement process of the user on the object to be detected in the space area where the visual field of the user reaches, and storing the shot image.
In a second aspect, this embodiment provides an augmented reality-based detection system, comprising an AR device and a measuring device in communication connection with it. The measuring device is used to measure an item to be measured of the object to be detected and send the measured parameters to the AR device; the AR device is used to execute the augmented reality-based detection guidance method of the first aspect.
In a third aspect, in the present embodiment, there is provided an electronic apparatus, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform the augmented reality based detection guidance method according to the first aspect.
In a fourth aspect, in the present embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, implements the steps of the augmented reality based detection guidance method according to the first aspect.
Compared with the related art, the detection guiding method, the detection guiding system, the detection guiding device and the storage medium based on the augmented reality solve the problem of low detection efficiency in the related art, and improve the detection efficiency.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a hardware structure of a terminal of an augmented reality-based detection guidance method according to an embodiment of the present application;
fig. 2 is a flowchart of an augmented reality-based detection guidance method according to an embodiment of the present application;
FIG. 3 is an architecture diagram of an augmented reality based detection system according to an embodiment of the present application;
FIG. 4 is a flow chart of the operation of the augmented reality based detection system of the preferred embodiment of the present application;
fig. 5 is a schematic diagram of AR glasses displaying a virtual model in an overlaid manner according to the preferred embodiment of the present application.
Detailed Description
For a clearer understanding of the objects, aspects and advantages of the present application, reference is made to the following description and accompanying drawings.
Unless defined otherwise, technical or scientific terms used herein shall have the same general meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The use of the terms "a" and "an" and "the" and similar referents in the context of this application do not denote a limitation of quantity, either in the singular or the plural. The terms "comprises," "comprising," "has," "having," and any variations thereof, as referred to in this application, are intended to cover non-exclusive inclusions; for example, a process, method, and system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or modules, but may include other steps or modules (elements) not listed or inherent to such process, method, article, or apparatus. Reference throughout this application to "connected," "coupled," and the like is not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference to "a plurality" in this application means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. In general, the character "/" indicates a relationship in which the objects associated before and after are an "or". The terms "first," "second," "third," and the like in this application are used for distinguishing between similar items and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the present embodiment may be executed in a terminal, a computer, or a similar computing device. For example, the method is executed on a terminal, and fig. 1 is a block diagram of a hardware structure of the terminal according to the augmented reality-based detection guidance method of the embodiment of the present application. As shown in fig. 1, the terminal may include one or more processors 102 (only one shown in fig. 1) and a memory 104 for storing data, wherein the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA. The terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is merely an illustration and is not intended to limit the structure of the terminal described above. For example, the terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the augmented reality-based detection guidance method in the embodiment, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the above-mentioned method. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The network described above includes a wireless network provided by a communication provider of the terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In this embodiment, an augmented reality-based detection guidance method is provided, and fig. 2 is a flowchart of the augmented reality-based detection guidance method according to the embodiment of the present application, and as shown in fig. 2, the flowchart includes the following steps:
step S201, identifying the object to be detected to obtain the identification information of the object to be detected.
Identification means include, but are not limited to, radio frequency identification (RFID), two-dimensional code recognition, and image recognition. For RFID, the identification information can be obtained by reading an electronic tag carried by the object to be detected; for two-dimensional code recognition, by reading a two-dimensional code label carried by the object; and for image recognition, by photographing the object and applying image processing to the captured image.
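A minimal sketch of how the three identification routes named above might feed a single interface. The function and per-route decoders below are hypothetical stand-ins; a real system would call an RFID reader SDK, a QR-code library, or an image-recognition model.

```python
# Hypothetical dispatcher over the three identification routes.
# Each decoder is a placeholder for the real reader/library/model.

def identify(payload, method):
    """Return identification info for an object using the chosen route."""
    decoders = {
        "rfid": lambda tag: tag.strip().upper(),         # read electronic tag
        "qrcode": lambda code: code.split(":")[-1],      # parse 2D-code content
        "image": lambda img: f"MODEL-{len(img) % 100}",  # placeholder classifier
    }
    if method not in decoders:
        raise ValueError(f"unknown identification method: {method}")
    return decoders[method](payload)
```

The design point is that all three routes reduce to the same output, the identification information, so the rest of the pipeline (step S202 onward) is route-agnostic.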
Step S202, according to the identification information, the virtual model of the object to be detected, the item to be detected and the position information of the item to be detected are obtained from a pre-stored database.
The virtual model may be scaled according to the shape and structure of the object to be detected, or may be only a rough outline. For example, if the object to be detected is a toy car with an aspect ratio of 2:1, the virtual model may be a virtual toy car scaled by a preset ratio, or simply a virtual cuboid with an aspect ratio of 2:1. The virtual model may be planar or stereoscopic, and static or dynamic.
The items to be measured comprise items for measuring corresponding specifications of the whole and/or part of the object to be detected; the specifications include, but are not limited to, mass, size, area, volume, density, and quantity. The position information includes the position coordinates used to mark each item to be measured on the virtual model.
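The database lookup of step S202 might be organized as below. The record type, field names, and database layout are illustrative assumptions, not defined by the patent.

```python
from dataclasses import dataclass

# Hypothetical record for an item to be measured and the position used to
# mark it on the virtual model; field names are illustrative.

@dataclass
class ItemToMeasure:
    name: str        # e.g. "overall length"
    spec: str        # mass, size, area, volume, density, quantity
    position: tuple  # (x, y, z) coordinate on the virtual model

def lookup(database, identification):
    """Fetch the virtual model and items to measure for one identification."""
    record = database[identification]
    return record["model"], record["items"]
```

With this shape, step S203 only needs to iterate over the returned items and place a marker at each `position`.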
Step S203, displaying the virtual model overlaid in the spatial area within the user's field of view, and marking the items to be measured on the overlaid virtual model according to their position information.
By way of example and not limitation, take the toy car again: the virtual model is a virtual cuboid with an aspect ratio of 2:1; the items to be measured are the overall length, overall width, and overall height of the toy car; and the position information consists of the position coordinates of three edges of the virtual cuboid. When the cuboid is overlaid in the spatial area within the user's field of view, arrows are marked beside its three edges, with the text "please measure the size" below each arrow.
In the related art, the detection process requires a user with operating experience to execute a detection task, i.e., there are work-experience requirements, and when a product requires measurements at many positions, erroneous and missed detections occur easily. Through the steps of this embodiment, the object to be detected is identified automatically, and detection guidance information generated for it is displayed visually in front of the user, serving as an operating manual for the detection work. The user no longer needs prior work experience, erroneous and missed detections are less likely, the problem of low detection efficiency in the related art is solved, and detection efficiency is improved.
There are various ways to display the virtual model overlaid in the spatial area within the user's field of view; three overlay modes are given below.
The first overlay mode: display the virtual model overlaid in a preset area of the spatial area within the user's field of view.
The preset area may be fixed, such as the upper-left corner of the spatial area within the user's field of view, or it may vary, for example according to the user's habits.
The second overlay mode: obtain the actual position of the object to be detected within the user's field of view, adjust the transparency value of the virtual model according to a preset threshold, and overlay the transparency-adjusted virtual model on the object according to that actual position.
Target positioning and tracking of the object to be detected yields its actual position within the user's field of view, providing a reference position for overlaying the virtual model on the object. After the virtual model is overlaid on the object, its transparency value is adjusted so that it does not occlude the object; the user can then see the virtual model and the object simultaneously, making the detection guidance clearer and more definite.
The third overlay mode: obtain the actual position of the object to be detected within the user's field of view, and overlay the virtual model in the field of view at a position that does not occlude the object, according to that actual position.
Target positioning and tracking of the object to be detected yields its actual position within the user's field of view, providing a reference position for the virtual model to avoid the object.
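The transparency adjustment of the second overlay mode can be sketched as a simple alpha cap. The 0-255 RGBA convention and the function names are assumptions for illustration only.

```python
# Minimal sketch of the second overlay mode: cap the virtual model's
# transparency at a preset threshold so the real object stays visible.
# The 0-255 alpha convention is an assumption.

def adjust_alpha(rgba, max_alpha=128):
    """Cap the alpha channel so the overlaid model never fully
    occludes the object underneath it."""
    r, g, b, a = rgba
    return (r, g, b, min(a, max_alpha))

def place_overlay(model_pixels, anchor, max_alpha=128):
    """Return semi-transparent model pixels anchored at the object's
    tracked position."""
    return anchor, [adjust_alpha(p, max_alpha) for p in model_pixels]
```

A real AR renderer would apply the same cap as a material opacity on the 3D model rather than per pixel; the per-pixel form just makes the threshold explicit.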
In some scenarios the items to be detected for a given product are not fixed: detection proceeds in multiple stages, and the items differ from stage to stage. To address this, before acquiring the virtual model of the object to be detected, the item to be detected, and the position information of the item to be detected from a pre-stored database according to the identification information, the method further includes:
acquiring a detection task, wherein the detection task carries information for measuring a preset item and a preset position of an object to be detected corresponding to the identification information;
and configuring the virtual model of the object to be detected, the item to be detected and the position information of the item to be detected corresponding to the identification information in a pre-stored database according to the detection task.
By way of example and not limitation, suppose the previous detection task completed the overall-size detection of the toy car, and the information carried by the current detection task indicates measuring the toy car's tire diameter. Before the current task starts, it is acquired and a configuration is made according to the indication information it carries: the virtual model becomes a virtual combination of a circle and a cuboid, where the circle represents a tire of the toy car and the cuboid represents the car body; the item to be measured is the tire diameter; and the position information of the item is the position coordinate of the circle's center.
Through this embodiment, the virtual model of the object to be detected, the items to be measured, and their position information can be configured flexibly, making it convenient for the user to execute different detection tasks.
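The per-task configuration step could look like the following sketch; the database layout and key names are assumptions.

```python
# Illustrative configuration step: before a detection task runs, write the
# task's preset items and positions into the pre-stored database under the
# object's identification, replacing the previous stage's configuration.

def configure_task(database, identification, model, items):
    """Stage the virtual model and items to measure for the current task."""
    database[identification] = {"model": model, "items": list(items)}
    return database[identification]
```

Because each stage overwrites the record for the same identification, the later lookup in step S202 always returns the configuration for the current stage.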
The above embodiments provide several modes for identifying the object to be detected, but RFID and two-dimensional code recognition rely on additional tags, which not only increases cost but may also violate the design requirements of the object; image recognition can place high computational demands on the computer. To address these problems, identifying the object to be detected to obtain its identification information includes:
acquiring a first measured parameter obtained after a first item to be measured of an object to be detected is measured;
and matching the first measured parameter with a first standard parameter stored in advance, and determining that the identification information corresponding to the first standard parameter matched with the first measured parameter is the identification information of the object to be detected.
By way of example and not limitation, the first item to be measured may be the mass of the toy car: the first measured parameter is the toy car's actual mass and the first standard parameter is its standard mass. The two are matched; if they fall within a preset error range of each other, matching succeeds, and the identification information associated with that standard mass is determined to be the identification information of the toy car.
The first item to be measured may be the toy car's mass, overall size, local size, or footprint, but the embodiment is not limited to these.
Through this embodiment, the object to be detected is identified conveniently and quickly based on its inherent attributes, without adding extra cost, violating the object's design requirements, or imposing a large computational overhead on the computer.
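The attribute-based identification above can be sketched as a tolerance match of the measured mass against a table of stored standard masses. The relative-error tolerance and table layout are illustrative assumptions.

```python
# Sketch of identification by an inherent attribute: match the measured
# mass against stored standard masses within a preset error range.

def identify_by_mass(measured, standards, tolerance=0.05):
    """Return the identification whose standard mass is within `tolerance`
    (relative error) of the measured mass, or None if nothing matches."""
    for identification, standard in standards.items():
        if abs(measured - standard) <= tolerance * standard:
            return identification
    return None
```

The same pattern works for any of the other inherent attributes (overall size, local size, footprint) by swapping the stored standard values.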
In some cases, the identification information cannot be uniquely determined after the object to be detected is identified. For example, toy cars of different models may share a similar specification or a similar appearance, so the model cannot be uniquely determined whether identification is by specification or by image processing. To address this, in some embodiments the identification information includes a model number, and when identifying the object to be detected yields a plurality of pieces of identification information, the method further includes:
obtaining, for each model, a confidence value that it is the true model of the object to be detected; arranging the models in descending order of confidence value; and setting the N models with the highest confidence values as candidate models for the user to select, where N is a positive integer.
Taking the above example of identification by the first item to be measured: in practice there is an error between the first measured parameter and the first standard parameter, and the smaller the error, the closer the object is to the model corresponding to that standard parameter. Therefore, when a plurality of models is obtained, the confidence value of each model can be determined from the difference between the first measured parameter and the first standard parameter corresponding to that model; a larger confidence value means a higher probability that the model is the true model of the object to be detected. The first item to be measured may be, for example, mass.
Taking image recognition as an example, when target recognition of the object to be detected yields a plurality of recognition results, the output value of the network layer preceding each recognition result may be used as its confidence value.
In some embodiments, N may be set to 1, i.e., the model with the highest confidence value is set as the default model for the user to confirm the selection.
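The candidate-ranking step might be sketched as below, scoring each model by the error between the first measured parameter and that model's standard parameter. The scoring rule is an assumption consistent with the description above (smaller error, higher confidence).

```python
# Sketch of the candidate-model step: rank models by how close the first
# measured parameter is to each model's standard parameter, then return
# the top-N models in descending confidence.

def rank_candidates(measured, standards, n=3):
    """Return up to `n` model names ordered by confidence, where a smaller
    error between measured and standard parameter means higher confidence."""
    scored = sorted(
        standards.items(),
        key=lambda kv: abs(measured - kv[1]),  # smaller error first
    )
    return [model for model, _ in scored[:n]]
```

Setting `n=1` gives the default-model behavior described above: the single highest-confidence model is offered for the user to confirm.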
To avoid model reading errors, in some embodiments the object to be detected carries identity identification information. After obtaining the confidence values, arranging the models in descending order, and setting the top-N models as candidate models for the user to select, the method further includes:
acquiring a candidate model selected by a user;
and reading the identity identification information of the object to be detected, matching the candidate model selected by the user against the identity identification information, and, when the match succeeds, determining from the matching result that the candidate model selected by the user is the real model of the object to be detected.
For example, where the identity identification information and the model number are both character strings, matching the candidate model selected by the user against the identity identification information includes comparing the first characters of the identity identification information with the first characters of the model number: if they are consistent, the match succeeds and the candidate model selected by the user is determined to be the real model of the object to be detected; otherwise, the match fails and the candidate model selected by the user is determined not to be the real model of the object to be detected.
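The string comparison in the example above amounts to a prefix check. In the sketch below, `prefix_len` is an assumed value — the patent says only "the first characters" without fixing how many:

```python
def matches_identity(candidate_model: str, identity_info: str,
                     prefix_len: int = 4) -> bool:
    """Match a user-selected candidate model against the object's
    identity identification information by comparing leading characters.
    `prefix_len` is an illustrative assumption."""
    # Equal prefixes -> match succeeds -> candidate is the real model.
    return candidate_model[:prefix_len] == identity_info[:prefix_len]
```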
In some embodiments, after the virtual model is displayed in a superimposed manner in a spatial area within a visual field of the user, and the item to be tested is marked on the virtual model displayed in the superimposed manner according to the position information of the item to be tested, the method further includes:
acquiring a second actual measurement parameter obtained after a second item to be detected of the object to be detected is measured;
acquiring a second standard parameter corresponding to the identification information and a second item to be tested from a pre-stored database;
and comparing the second measured parameter with a second standard parameter stored in advance, and determining whether the object to be detected is qualified according to the obtained comparison result.
The second item to be detected is different from the first item to be detected: the parameter measured for the first item is used to identify the identification information of the object to be detected, while the second item is the target of the detection task. For example, if the first item is measuring the mass of a toy car, the second item may be measuring its overall size; after the actual overall size of the toy car is measured, the overall standard size corresponding to the identification information is obtained, the actual overall size is compared with the overall standard size, and whether the toy car passes detection is determined from the comparison result.
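The comparison against the second standard parameter can be as simple as a range check. The symmetric nominal ± tolerance form below is an assumption for illustration — the patent speaks only of the measured parameter falling within "the range of" the standard parameter:

```python
def within_standard_range(measured: float, nominal: float,
                          tolerance: float) -> bool:
    """Qualified when the measured value (e.g. the toy car's overall size)
    lies inside nominal +/- tolerance. The symmetric-tolerance form is an
    illustrative assumption."""
    return (nominal - tolerance) <= measured <= (nominal + tolerance)
```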
In some embodiments, comparing the second measured parameter with a second standard parameter stored in advance, and determining whether the object to be detected is qualified according to the obtained comparison result includes:
acquiring a second actual measurement parameter obtained after the object to be detected is subjected to primary measurement;
judging whether the second measured parameter falls into the range of the second standard parameter;
if so, determining that the object to be detected is qualified;
if not, executing the following steps in a loop until the third measured parameter has been compared with the second standard parameter a preset number of times:
step A, generating prompt information for instructing a user to measure an object to be detected again;
step B, obtaining a third actual measurement parameter obtained after the object to be detected is measured again;
and step C, judging whether the third measured parameter falls within the range of the second standard parameter; if so, determining that the object to be detected is qualified and exiting the loop; if not, returning to step A.
This arrangement eliminates accidental errors arising during the detection process and improves detection accuracy.
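Steps A to C above can be sketched as a bounded retry loop. The function name, the `print` prompt standing in for the AR prompt information, and the default retry count are illustrative assumptions:

```python
def inspect_with_retries(measure, lo, hi, max_retries=3):
    """Re-measure up to `max_retries` times when a reading falls outside
    the standard range [lo, hi], mirroring steps A-C above.
    `measure` is a callable returning one reading; names are illustrative."""
    if lo <= measure() <= hi:
        return True                                  # first measurement qualified
    for _ in range(max_retries):
        print("Please measure the object again")     # step A: prompt the user
        value = measure()                            # step B: re-measure
        if lo <= value <= hi:                        # step C: compare with range
            return True                              # qualified; exit the loop
    return False                                     # unqualified after preset retries
```

A transiently wrong reading (slipped caliper, unstable scale) is forgiven, while a consistently out-of-range part still fails — which is how the loop removes accidental errors without masking real defects.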
In some embodiments, after comparing the second measured parameter with a second standard parameter stored in advance, and determining whether the object to be detected is qualified according to the obtained comparison result, the method further includes:
and storing the second actual measurement parameter and information whether the object to be detected is qualified or not according to the identification information of the object to be detected.
With this arrangement, the user does not need to manually enter into the database the identification information of the object to be detected, the second measured parameter, or the pass/fail result, eliminating a data-entry step.
In some embodiments, after the virtual model is displayed in a superimposed manner in a spatial area within a visual field of the user, and the item to be tested is marked on the virtual model displayed in the superimposed manner according to the position information of the item to be tested, the method further includes:
and shooting a measurement process of the object to be detected of the user in a space area reached by the visual field of the user, and storing a shot image.
When the user measures the object to be detected with a measuring device, the operations performed during the measurement can be captured and recorded, providing a record of the detection process and making it traceable.
With reference to the augmented reality-based detection guiding method of the foregoing embodiments, this embodiment further provides an augmented reality-based detection system, including an AR device and a measuring device in communication connection with the AR device. The measuring device measures an item to be detected of an object to be detected and sends the measured parameters to the AR device, and the AR device executes the augmented reality-based detection guiding method of any of the embodiments above.
The AR device and the measuring device may establish the communication connection via Bluetooth, ZigBee, or Wi-Fi. Measuring devices include, but are not limited to, mass measuring devices, dimensional measuring devices, area measuring devices, volume measuring devices, density measuring devices, and quantity measuring devices.
The augmented reality-based detection guiding method has been described in the above embodiments and is not repeated here.
Fig. 3 is an architecture diagram of an augmented reality-based detection system according to an embodiment of the present application, and as shown in fig. 3, the measurement apparatus includes a first measurement module 31, the AR device includes a wearable component (not shown), a first storage module 32, a control module 33, and a display screen 34, and the first storage module 32, the control module 33, and the display screen 34 are installed in the wearable component.
The wearable component is worn by the user and may be, for example, glasses or a helmet;
the first measurement module 31 is configured to measure a first to-be-measured item of an object to be measured, and send a first measured parameter obtained by measurement to the control module 33;
the first storage module 32 is configured to store identification information of an object to be detected, a virtual model, an item to be detected, and position information of the item to be detected;
the control module 33 is configured to identify the object to be detected according to the first measured parameter, so as to obtain identification information of the object to be detected; acquiring a virtual model of an object to be detected, a project to be detected and position information of the project to be detected from a first storage module according to the identification information; and superposing and displaying the virtual model on the display screen 34, and marking the item to be detected on the superposed and displayed virtual model according to the position information of the item to be detected.
In some of these embodiments, referring to fig. 3, the measurement device further comprises a second measurement module 35.
The second measurement module 35 is configured to measure a second to-be-measured item of the to-be-measured object, obtain a second measured parameter, and send the second measured parameter to the control module 33;
the control module 33 is configured to obtain a second standard parameter corresponding to the identification information and the second item to be tested from the first storage module 32; and comparing the second measured parameter with a second standard parameter stored in advance, and determining whether the object to be detected is qualified according to the obtained comparison result.
In some embodiments, referring to fig. 3, the AR device further includes a second storage module 36, configured to store the second measured parameter and information about whether the object to be detected is qualified in detection according to the identification information of the object to be detected.
In some embodiments, the AR device further includes a shooting module 37, configured to shoot a measurement process of the object to be detected by the user in a spatial area within the field of view of the user, and store a shot image.
The detection system of the present embodiment is explained below by way of a preferred embodiment.
In a preferred embodiment, the detection system comprises AR glasses, a scale and a vernier caliper, and the AR glasses are respectively in communication connection with the scale and the vernier caliper through Bluetooth. Fig. 4 is a flowchart illustrating the operation of the augmented reality-based detection system according to the preferred embodiment of the present application, and as shown in fig. 4, the flowchart includes the following steps:
step S401, measuring the actual mass of the product with a scale;
step S402, the scale sends the actual mass of the product to the AR glasses;
step S403, the AR glasses acquire the model and the standard size of the product from the database according to the actual mass of the product;
step S404, the AR glasses read the virtual model of the product according to the model of the product, superimpose the virtual model on the display screen in an animation mode, and mark the prompt information of size measurement at the position needing to be measured;
step S405, measuring the actual size of the product by a vernier caliper;
step S406, the vernier caliper sends the actual size of the product to the AR glasses;
step S407, the AR glasses compare the actual size of the product with the standard size; if the actual size falls within the range of the standard size, the measurement passes and the result is recorded; if it does not, the user is prompted to measure again, and if M measurements all fail, the product is determined to be unqualified, where M is a positive integer;
and step S408, finishing the test of the AR glasses and uploading the data to the background server.
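The preferred-embodiment flow (steps S401 to S408) can be condensed into one sketch. Everything here is assumed for illustration: the callables stand in for the Bluetooth scale and vernier caliper, the database schema is invented, matching by nearest mass is one possible realization of S403, and the AR overlay of S404 and the upload of S408 are reduced to comments:

```python
def run_inspection(scale_read, caliper_read, database, m_retries=3):
    """End-to-end sketch of steps S401-S408: identify the product by mass,
    then verify its size with up to M measurements.

    `database` maps model -> (standard_mass, size_lo, size_hi); the schema
    and nearest-mass matching are illustrative assumptions."""
    actual_mass = scale_read()                                   # S401-S402
    model = min(database,                                        # S403: nearest
                key=lambda m: abs(database[m][0] - actual_mass)) # standard mass
    _, size_lo, size_hi = database[model]
    # S404 would overlay the virtual model and mark "please measure the size".
    for _ in range(m_retries):                                   # S405-S407
        actual_size = caliper_read()
        if size_lo <= actual_size <= size_hi:
            return model, True        # qualified; S408 uploads the record
    return model, False               # unqualified after M measurements
```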
Fig. 5 is a schematic diagram of an AR glasses superimposed display virtual model according to a preferred embodiment of the present application, as shown in fig. 5, the virtual model of the product is a rectangular parallelepiped, the detection task is to measure the size of the product, an arrow is marked on one edge side of the rectangular parallelepiped model, and the text "please measure the size" is marked below the arrow.
There is also provided in this embodiment an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps in any of the above embodiments of the augmented reality based detection guidance method.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute, by means of a computer program, the following steps:
s1, identifying the object to be detected to obtain the identification information of the object to be detected;
s2, acquiring the virtual model of the object to be detected, the item to be detected and the position information of the item to be detected from a pre-stored database according to the identification information;
and S3, overlapping and displaying the virtual model in the space area where the visual field of the user reaches, and marking the item to be detected on the overlapped and displayed virtual model according to the position information of the item to be detected.
It should be noted that, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and optional implementations, and details are not described again in this embodiment.
In addition, in combination with the detection guidance method based on augmented reality provided in the above embodiment, a storage medium may also be provided to implement in this embodiment. The storage medium having stored thereon a computer program; the computer program, when executed by a processor, implements the steps of any of the augmented reality based detection guidance methods of the above embodiments.
It should be understood that the specific embodiments described herein are merely illustrative of this application and are not intended to be limiting. All other embodiments, which can be derived by a person skilled in the art from the examples provided herein without any inventive step, shall fall within the scope of protection of the present application.
It is obvious that the drawings are only examples or embodiments of the present application, and it is obvious to those skilled in the art that the present application can be applied to other similar cases according to the drawings without creative efforts. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
The term "embodiment" is used herein to mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly or implicitly understood by one of ordinary skill in the art that the embodiments described in this application may be combined with other embodiments without conflict.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the patent protection. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (12)

1. An augmented reality-based detection guidance method is characterized by comprising the following steps:
identifying an object to be detected to obtain identification information of the object to be detected;
acquiring a virtual model of the object to be detected, a project to be detected and position information of the project to be detected from a pre-stored database according to the identification information;
and displaying the virtual model in a superposed manner in a spatial area where a user view reaches, and marking the item to be detected on the virtual model displayed in a superposed manner according to the position information of the item to be detected.
2. The augmented reality-based detection guidance method according to claim 1, before acquiring the virtual model of the object to be detected, the item to be detected, and the position information of the item to be detected from a pre-stored database according to the identification information, the method further comprising:
acquiring a detection task, wherein the detection task carries information for measuring a preset item and a preset position of an object to be detected corresponding to the identification information;
and configuring a virtual model of the object to be detected, the item to be detected and the position information of the item to be detected corresponding to the identification information in a pre-stored database according to the detection task.
3. The augmented reality-based detection guidance method according to claim 1, wherein identifying an object to be detected to obtain identification information of the object to be detected comprises:
acquiring a first measured parameter obtained after a first to-be-measured item of the to-be-detected object is measured;
and matching the first measured parameter with a first standard parameter stored in advance, and determining that the identification information corresponding to the first standard parameter matched with the first measured parameter is the identification information of the object to be detected.
4. The augmented reality-based detection guidance method according to claim 1, wherein the identification information includes a model number, and in a case where the object to be detected is identified to obtain a plurality of identification information of the object to be detected, the method further includes:
obtaining a confidence value that each of the models is the real model of the object to be detected, arranging the models in descending order of confidence value, and setting the models with the top N confidence values as candidate models for the user to select from, wherein N is a positive integer.
5. The augmented reality-based detection guidance method according to claim 4, wherein the object to be detected carries identification information, and after obtaining confidence values of respective models that are true models of the object to be detected, arranging the models in descending order according to the confidence values, and setting the model with the top N confidence values as a candidate model for the user to select, the method further comprises:
obtaining the candidate model selected by the user;
and reading the identity identification information of the object to be detected, matching the candidate model selected by the user with the identity identification information, and determining whether the candidate model selected by the user is the real model of the object to be detected according to a matching result under the condition of successful matching.
6. The augmented reality-based detection guidance method according to claim 1, wherein after the virtual model is displayed in a superimposed manner in a spatial region within a visual field of a user, and the item to be detected is marked on the virtual model displayed in the superimposed manner according to the position information of the item to be detected, the method further comprises:
acquiring a second actual measurement parameter obtained after a second item to be detected of the object to be detected is measured;
acquiring a second standard parameter corresponding to the identification information and the second item to be tested from a pre-stored database;
and comparing the second measured parameter with a second standard parameter stored in advance, and determining whether the object to be detected is qualified according to the obtained comparison result.
7. The augmented reality-based detection guidance method according to claim 6, wherein comparing the second measured parameter with a second standard parameter stored in advance, and determining whether the object to be detected is qualified according to an obtained comparison result comprises:
acquiring a second actual measurement parameter obtained after the object to be detected is subjected to primary measurement;
judging whether the second measured parameter falls into the range of the second standard parameter;
if so, determining that the object to be detected is qualified;
if not, executing the following steps in a loop until the third measured parameter has been compared with the second standard parameter a preset number of times:
step A, generating prompt information for instructing the user to measure the object to be detected again;
step B, obtaining a third actual measurement parameter obtained after the object to be detected is measured again;
and step C, judging whether the third measured parameter falls within the range of the second standard parameter; if so, determining that the object to be detected is qualified and exiting the loop; if not, returning to step A.
8. The augmented reality-based detection guidance method according to claim 6, wherein after comparing the second measured parameter with a second standard parameter stored in advance and determining whether the object to be detected is qualified according to the obtained comparison result, the method further comprises:
and storing the second measured parameter and information whether the object to be detected is qualified or not according to the identification information of the object to be detected.
9. The augmented reality-based detection guidance method according to any one of claims 1 to 8, wherein after the virtual model is displayed in a superimposed manner in a spatial region within a visual field of a user and the item to be detected is marked on the virtual model displayed in the superimposed manner according to position information of the item to be detected, the method further comprises:
and shooting the measurement process of the user on the object to be detected in the space area where the visual field of the user reaches, and storing the shot image.
10. An augmented reality based detection system, comprising: the AR equipment is in communication connection with the measuring device, the measuring device is used for measuring an item to be measured of an object to be detected and sending measured parameters obtained through measurement to the AR equipment, and the AR equipment is used for executing the augmented reality-based detection guiding method according to any one of claims 1 to 9.
11. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the augmented reality based detection guidance method of any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the augmented reality based detection guidance method of any one of claims 1 to 9.
CN202110440566.6A 2021-04-23 2021-04-23 Detection guiding method, system, device and storage medium based on augmented reality Pending CN113160423A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110440566.6A CN113160423A (en) 2021-04-23 2021-04-23 Detection guiding method, system, device and storage medium based on augmented reality


Publications (1)

Publication Number Publication Date
CN113160423A true CN113160423A (en) 2021-07-23

Family

ID=76870086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110440566.6A Pending CN113160423A (en) 2021-04-23 2021-04-23 Detection guiding method, system, device and storage medium based on augmented reality

Country Status (1)

Country Link
CN (1) CN113160423A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110248083A1 (en) * 2010-03-12 2011-10-13 Sunrise R&D Holdings, Llc System and method for product identification
CN109189213A (en) * 2018-08-15 2019-01-11 华中科技大学 A kind of assembling process of products augmented reality guidance method based on movable computer
US20200074743A1 (en) * 2017-11-28 2020-03-05 Tencent Technology (Shenzhen) Company Ltd Method, apparatus, device and storage medium for implementing augmented reality scene
CN111970557A (en) * 2020-09-01 2020-11-20 深圳市慧鲤科技有限公司 Image display method, image display device, electronic device, and storage medium
CN112114669A (en) * 2020-09-07 2020-12-22 南京智导智能科技有限公司 Machine part machining precision detection guide system based on augmented reality
CN112230765A (en) * 2020-09-29 2021-01-15 杭州灵伴科技有限公司 AR display method, AR display device, and computer-readable storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
林瑞宗等: "基于AR空间测量技术的变电工程竣工验收研究", 《现代信息科技》 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination