CN112034985A - Augmented reality image display method, device, equipment and storage medium - Google Patents

Augmented reality image display method, device, equipment and storage medium

Info

Publication number
CN112034985A
Authority
CN
China
Prior art keywords
preset
similarity
vector
augmented reality
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010906391.9A
Other languages
Chinese (zh)
Inventor
张峥超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN202010906391.9A
Publication of CN112034985A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/73 Querying
    • G06F 16/735 Filtering based on additional data, e.g. user or group profiles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/56 Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/30 Transforming light or analogous information into electric information
    • H04N 5/33 Transforming infrared radiation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Neurosurgery (AREA)
  • Neurology (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Dermatology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the field of intelligent education within artificial intelligence and discloses a similarity-value-based augmented reality image display method, device, equipment and readable storage medium. The method comprises the following steps: acquiring a muscle state signal sent by a wearable device; judging whether the muscle state signal is relaxed; collecting a current environment picture; calling a plurality of scene pictures from a preset scene picture database and calculating the similarity between the current environment picture and each scene picture; acquiring the designated comparison video corresponding to the maximum similarity value; and collecting a real-time video and integrating it with the designated comparison video for display. When the user is not in a working state, the scheme automatically recommends a relevant comparison video according to the user's working environment and integrates it with the real-time video to obtain an augmented reality image, so that fragmented time is reused, the user does not need to manually select a learning video, and the learning effect is improved.

Description

Augmented reality image display method, device, equipment and storage medium
Technical Field
The application relates to the field of intelligent education within artificial intelligence, and in particular to a similarity-value-based augmented reality image display method, device and equipment.
Background
With the rapid development of the internet, video teaching over the network is increasingly common. However, existing learning videos can only be played after the user selects them manually, and they are divorced from the actual application scene, so the user's learning effect is poor. Conventional schemes therefore cannot automatically play the learning video a user needs in a targeted manner, and the learning effect achieved is insufficient.
Disclosure of Invention
The application mainly aims to provide a similarity-value-based augmented reality image display method, device and computer equipment, to solve the problem that current network video teaching cannot automatically play the learning videos users need in a targeted manner, resulting in a poor learning effect.
To achieve the above object, the present application provides a similarity-value-based augmented reality image display method applied to an augmented reality terminal. The augmented reality terminal is in signal connection with a preset wearable device; the wearable device is worn on the user's arm when in use and can detect the muscle state of that arm. The similarity-value-based augmented reality image display method includes:
acquiring a muscle state signal sent by the wearable device; the muscle state signal is obtained by sensing through a preset muscle state sensor on the wearable device, and the muscle state signal comprises tension or relaxation;
judging whether the muscle state signal is relaxed;
if the muscle state signal is relaxed, acquiring a current environment picture by adopting a first camera preset on the augmented reality terminal;
calling a plurality of scene pictures from a preset scene picture database, and calculating the similarity between the current environment picture and the scene picture according to a preset picture similarity calculation method so as to obtain a plurality of similarity values respectively corresponding to the plurality of scene pictures;
selecting a maximum similarity value from the similarity values, and judging whether the maximum similarity value is greater than a preset similarity threshold value;
if the maximum similarity value is larger than the preset similarity threshold value, acquiring the designated comparison video corresponding to the maximum similarity value according to the preset correspondence among similarity values, scene pictures and comparison videos;
acquiring a real-time video by using a second camera preset on the augmented reality terminal, performing integrated processing on the real-time video and the specified contrast video by using a preset augmented reality instruction, and displaying an augmented reality image obtained through the integrated processing on a screen of the augmented reality terminal; wherein the resolution of the second camera is higher than the resolution of the first camera.
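The claimed steps can be pictured as a single decision pipeline. The sketch below is illustrative only, not the patented implementation; the data shape (a list of (scene picture, video) pairs), the function names, and the default threshold are all assumptions.

```python
def choose_comparison_video(muscle_signal, env_picture, scene_db, similarity,
                            threshold=0.8):
    """Return the comparison video of the best-matching scene, or None.

    Sketch only: scene_db is a list of (scene_picture, video) pairs and
    similarity is any picture-similarity function; both are assumptions.
    """
    if muscle_signal != "relaxed":          # steps S1-S2: act only when not working
        return None
    # step S4: similarity of the environment picture against every scene picture
    scores = [similarity(env_picture, pic) for pic, _ in scene_db]
    best = max(range(len(scores)), key=scores.__getitem__)   # step S5: maximum
    if scores[best] <= threshold:
        return None                         # guard against misjudgment
    return scene_db[best][1]                # step S6: mapped comparison video
```

The returned video would then be integrated with the second camera's real-time stream (step S7), which the sketch deliberately leaves out.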
Further, a first pattern drawn with a light conversion material exists in the current environment facing the first camera; the light conversion material converts infrared light into green visible light, and a green second pattern is preset in the scene picture. The step of collecting the current environment picture with the first camera preset on the augmented reality terminal comprises:
starting a preset infrared flash lamp, and acquiring a current environment picture by adopting a first camera preset on the augmented reality terminal, so that a green first pattern exists in the current environment picture;
and identifying a first pattern from the current environment picture according to a preset pattern identification method.
Further, the step of calculating the similarity between the current environment picture and the scene picture according to a preset picture similarity calculation method includes:
identifying a second pattern from the scene picture according to a preset pattern identification method;
and comparing the first pattern with the second pattern to obtain the pattern similarity between the first pattern and the second pattern, and taking the pattern similarity as the similarity between the current environment picture and the scene picture.
Further, the step of retrieving the plurality of scene pictures from the preset scene picture database includes:
acquiring a schedule of the user;
inquiring the schedule list to obtain the current working content corresponding to the current time;
searching the current working content for a designated field in a preset scene picture database so as to acquire a plurality of scene pictures corresponding to the current working content; wherein, the appointed field of the scene picture in the scene picture database is recorded with the work content information in advance.
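The schedule-based pre-screening above might look as follows. This is a hedged sketch: the shape of the schedule (time ranges mapped to work-content tags) and database rows carrying a `work_content` designated field are assumptions, not the filed implementation.

```python
from datetime import time

def retrieve_scene_pictures(schedule, now, scene_db):
    """Call only the scene pictures matching the user's current work content.

    schedule: dict mapping (start, end) time ranges to a work-content tag.
    scene_db: rows whose designated 'work_content' field was filled in advance.
    Both shapes are illustrative assumptions.
    """
    current_work = None
    for (start, end), content in schedule.items():
        if start <= now < end:
            current_work = content          # work content for the current time
            break
    if current_work is None:
        return []
    # preliminary screening: keep only scenes whose designated field matches
    return [row for row in scene_db if row["work_content"] == current_work]
```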
Further, after the step of selecting the maximum similarity value from the similarity values and determining whether the maximum similarity value is greater than a preset similarity threshold value, the method includes:
if the maximum similarity value is not greater than a preset similarity threshold value, acquiring personal information data of the user, and mapping the personal information data of the user into a first vector according to a preset vector mapping method; wherein the personal information data includes at least professional information;
calling pre-collected personal information data of a reference object, and mapping the personal information data of the reference object into a second vector according to the preset vector mapping method; wherein the reference object has the same professional information as the user, and the personal information data of the reference object corresponds to a preset learning video;
calculating a vector similarity value of the first vector and the second vector according to a preset vector similarity calculation method;
judging whether the vector similarity value is larger than a preset vector similarity threshold value or not;
and if the vector similarity value is larger than a preset vector similarity threshold value, displaying the learning video on a screen of the augmented reality terminal.
Further, the step of calculating the vector similarity value between the first vector and the second vector according to a preset vector similarity calculation method includes:
according to the formula:
(the formula itself is reproduced only as an image, Figure BDA0002661615460000031, in the original filing)
and calculate a vector similarity value X of the first vector and the second vector, wherein M is the first vector, N is the second vector, Mi is the i-th component of the first vector, Ni is the i-th component of the second vector, and the first vector and the second vector each have p components.
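Because the formula survives only as an image in this text, here is a sketch using cosine similarity, a common choice for comparing two p-component vectors. It stands in as an assumption, not a reproduction of the filed formula.

```python
import math

def vector_similarity(m, n):
    """Cosine similarity of two p-component vectors M and N.

    Assumption: the patent's formula is only available as an image, so cosine
    similarity is used here as a plausible stand-in, not the filed formula.
    """
    if len(m) != len(n):
        raise ValueError("vectors must have the same number of components p")
    dot = sum(mi * ni for mi, ni in zip(m, n))       # sum over i of Mi * Ni
    norm_m = math.sqrt(sum(mi * mi for mi in m))     # |M|
    norm_n = math.sqrt(sum(ni * ni for ni in n))     # |N|
    return dot / (norm_m * norm_n)
```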
An embodiment of the present application further provides a similarity-value-based augmented reality image display device, comprising:
the signal acquisition unit is used for acquiring a muscle state signal sent by the wearable device; the muscle state signal is obtained by sensing through a preset muscle state sensor on the wearable device, and the muscle state signal comprises tension or relaxation;
the judging unit is used for judging whether the muscle state signal is relaxed or not;
the acquisition unit is used for acquiring a current environment picture by adopting a first camera preset on the augmented reality terminal if the muscle state signal is relaxed;
the calculation unit is used for calling a plurality of scene pictures from a preset scene picture database, and calculating the similarity between the current environment picture and the scene picture according to a preset picture similarity calculation method so as to obtain a plurality of similarity values respectively corresponding to the scene pictures;
the selecting unit is used for selecting a maximum similarity value from the similarity values and judging whether the maximum similarity value is larger than a preset similarity threshold value or not;
a comparison video obtaining unit, configured to obtain, if the maximum similarity value is greater than a preset similarity threshold, a specified comparison video corresponding to the maximum similarity value according to a correspondence between a preset similarity value, a scene picture, and a comparison video;
the display unit is used for acquiring a real-time video by using a second camera preset on the augmented reality terminal, performing integrated processing on the real-time video and the specified contrast video by using a preset augmented reality instruction, and displaying an augmented reality image obtained through the integrated processing on a screen of the augmented reality terminal; wherein the resolution of the second camera is higher than the resolution of the first camera.
Further, a first pattern drawn with a light conversion material exists in the current environment facing the first camera; the light conversion material converts infrared light into green visible light, and a green second pattern is preset in the scene picture. The acquisition unit includes:
the starting unit is used for starting a preset infrared flash lamp and acquiring a current environment picture by adopting a first camera preset on the augmented reality terminal, so that a green first pattern exists in the current environment picture;
and the first identification unit is used for identifying a first pattern from the current environment picture according to a preset pattern identification method.
The present application further provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps of any of the above methods when executing the computer program.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of any of the above.
With the similarity-value-based augmented reality image display method, device and equipment of the application, a relevant comparison video can be automatically recommended according to the user's working environment when the user is not in a working state, and the comparison video is integrated with a real-time video to obtain an augmented reality image. Fragmented time is thus reused, the user does not need to select a learning video manually, and the learning effect is improved.
Drawings
Fig. 1 is a schematic flowchart illustrating an augmented reality image displaying method based on similarity values according to an embodiment of the present disclosure;
FIG. 2 is a block diagram illustrating an embodiment of an augmented reality image display apparatus based on similarity values;
fig. 3 is a block diagram illustrating a structure of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, an embodiment of the application provides an augmented reality image display method based on a similarity value, which can be applied to the field of intelligent education within artificial intelligence and runs on an augmented reality terminal. The augmented reality terminal is in signal connection with a preset wearable device; the wearable device is worn on the user's arm when in use and can detect the muscle state of that arm. The similarity-value-based augmented reality image display method includes:
s1, acquiring a muscle state signal sent by the wearable device; the muscle state signal is obtained by sensing through a preset muscle state sensor on the wearable device, and the muscle state signal comprises tension or relaxation;
s2, judging whether the muscle state signal is relaxed;
s3, if the muscle state signal is relaxed, acquiring a current environment picture by using a first camera preset on the augmented reality terminal;
s4, calling a plurality of scene pictures from a preset scene picture database, and calculating the similarity between the current environment picture and the scene picture according to a preset picture similarity calculation method, so as to obtain a plurality of similarity values respectively corresponding to the plurality of scene pictures;
s5, selecting the maximum similarity value from the similarity values, and judging whether the maximum similarity value is larger than a preset similarity threshold value;
s6, if the maximum similarity value is larger than a preset similarity threshold value, acquiring a specified contrast video corresponding to the maximum similarity value according to the corresponding relation among the preset similarity value, the scene picture and the contrast video;
s7, acquiring a real-time video by using a second camera preset on the augmented reality terminal, performing integrated processing on the real-time video and the specified contrast video by using a preset augmented reality instruction, and displaying an augmented reality image obtained through the integrated processing on a screen of the augmented reality terminal; wherein the resolution of the second camera is higher than the resolution of the first camera.
As described in step S1, a muscle state signal sent by the wearable device is acquired; the muscle state signal is sensed by a preset muscle state sensor on the wearable device, and its value is either tense or relaxed. The wearable device is worn on the user's arm when in use and can detect the muscle state of that arm. It can be any feasible device, for example a bracelet-shaped wearable device that fits tightly against the arm and can detect the muscle state. The muscle state sensor may be any feasible sensor, such as a pressure sensor: because the pressure exerted outward (that is, in the direction away from the skin) differs between a tensed and a relaxed muscle, the pressure signal sensed by the pressure sensor reveals the muscle state.
As described in step S2, it is determined whether the muscle state signal is relaxed. When the muscle is relaxed, the value sensed by the muscle state sensor (for example, a pressure sensor) is small; when the signal value stays below a preset threshold for a preset duration, the muscle state is determined to be relaxed.
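The threshold-and-duration rule just described can be sketched as follows; the threshold value, the sample-count duration, and the signal units are illustrative assumptions.

```python
def classify_muscle_state(pressure_samples, threshold=0.3, hold_samples=5):
    """Return 'relaxed' once the pressure signal stays below the threshold for
    a preset number of consecutive samples, otherwise 'tense'.

    Threshold and duration values are illustrative assumptions, not the
    patent's calibration.
    """
    run = 0
    for p in pressure_samples:
        run = run + 1 if p < threshold else 0   # consecutive low readings
        if run >= hold_samples:
            return "relaxed"
    return "tense"
```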
As described in step S3, if the muscle state signal is relaxed, a current environment picture is collected with the first camera preset on the augmented reality terminal. Note that "the wearable device is worn on the user's arm when in use" and "determining whether the muscle state signal is relaxed" together form a deliberate design aimed at identifying whether a particular kind of user, one who must use an arm while working, such as a machine operator, is in a working state. If the muscle state signal is relaxed, the user is not working; at that moment the first camera preset on the augmented reality terminal collects the current environment picture, which serves as the selection basis for the subsequent comparison video.
As described in step S4, a plurality of scene pictures are called from the preset scene picture database, and the similarity between the current environment picture and each scene picture is calculated according to a preset picture similarity calculation method, yielding a plurality of similarity values corresponding to the scene pictures. Each scene picture corresponds to a comparison video. The scene picture may be obtained in any feasible way, for example by extracting one frame from the comparison video. The picture similarity calculation may use any feasible method, such as a picture comparison model based on a convolutional neural network.
As described in step S5, the maximum similarity value is selected from the similarity values, and it is determined whether it is greater than the preset similarity threshold. The scene picture with the maximum similarity value is the one most similar to the current environment, but misjudgment is still possible; the application therefore guards against it by checking whether the maximum similarity value exceeds the preset similarity threshold.
As described in step S6, if the maximum similarity value is greater than the preset similarity threshold, the designated comparison video corresponding to the maximum similarity value is obtained according to the preset correspondence among similarity values, scene pictures and comparison videos. A maximum similarity value above the threshold indicates that a scene picture similar to the current environment has been found, so the designated comparison video can be looked up through that correspondence. It should be emphasized that the designated comparison video is special in that its scene is the same as the current environment, so the image obtained by integrating it with augmented reality technology has a very strong contrast effect and therefore a marked learning-enhancement effect. Because of this property, the application is particularly suitable for inexperienced or less experienced workers: images with a strong contrast effect shorten the time needed to learn new skills.
As described in step S7, a real-time video is collected with the second camera preset on the augmented reality terminal, the real-time video and the designated comparison video are integrated with a preset augmented reality instruction, and the augmented reality image obtained from the integration is displayed on the screen of the augmented reality terminal; the resolution of the second camera is higher than that of the first camera. The augmented reality technology employed may be any feasible one, such as computer-display-based augmented reality or video see-through augmented reality. The integrated augmented reality image thus contains both objects in the actual environment (such as a machine tool) and content from the comparison video (such as operating instructions and reminders for the machine tool), which improves the user's learning effect. In addition, the second camera's higher resolution makes the displayed video clearer, while the lower-resolution first camera keeps the overall consumption of computing resources low. The integration of the real-time video and the designated comparison video can be performed in any feasible way: for example, a two-layer image overlay, with the real-time video as the bottom layer and the designated comparison video as the top layer, completes the integration once the layers are superimposed; or a split-screen integration, in which the screen is divided into left and right halves, the real-time video is displayed on the left and the designated comparison video, as its related object, on the right.
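The split-screen variant of the integration step can be sketched on plain pixel arrays. A real augmented reality terminal would composite GPU textures, so this only illustrates the idea; the frame representation (rows of pixels as nested lists) is an assumption.

```python
def split_screen(live_frame, comparison_frame):
    """Split-screen integration: live video fills the left half of each row,
    the comparison video fills the right half.

    Frames are lists of pixel rows of equal size; this is a sketch of the
    idea, not a real compositor.
    """
    assert len(live_frame) == len(comparison_frame)
    out = []
    for live_row, cmp_row in zip(live_frame, comparison_frame):
        half = len(live_row) // 2
        # left half from the real-time video, right half from the comparison video
        out.append(live_row[:half] + cmp_row[half:])
    return out
```

The two-layer overlay variant would instead blend the comparison video over the live frame with per-pixel transparency.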
In one embodiment, a first pattern drawn with a light conversion material exists in the current environment facing the first camera; the light conversion material converts infrared light into green visible light, and a green second pattern is preset in the scene picture. Step S3 of collecting the current environment picture with the first camera preset on the augmented reality terminal includes:
s301, starting a preset infrared flash lamp, and acquiring a current environment picture by adopting a first camera preset on the augmented reality terminal, so that a green first pattern exists in the current environment picture;
s302, identifying a first pattern from the current environment picture according to a preset pattern identification method;
the step S4 of calculating the similarity between the current environment picture and the scene picture according to a preset picture similarity calculation method includes:
s401, identifying a second pattern from the scene picture according to a preset pattern identification method;
s402, comparing the first pattern with the second pattern to obtain pattern similarity between the first pattern and the second pattern, and taking the pattern similarity as the similarity between the current environment picture and the scene picture.
As described above, the current environment picture is collected with the first camera preset on the augmented reality terminal, and the similarity between the current environment picture and the scene picture is calculated; the picture similarity calculation may use any feasible method, for example a picture comparison model based on a convolutional neural network. The light conversion material can be any feasible material, provided it converts infrared light into green visible light. For example, a heavily Er3+-doped oxyfluoride compound converts infrared photons into visible light in the green spectrum under infrared irradiation. To reduce the computing resources consumed when the similarity between the current environment picture and the scene picture is calculated, a special design is adopted. Specifically, a first pattern is drawn in advance with the light conversion material in the current environment (for example, on a machine tool there); under normal conditions the first pattern is indistinguishable from the rest of the environment (and does not interfere with normal use), but under infrared light it appears green, so a green first pattern is present in the current environment picture. Correspondingly, a green second pattern is present in the scene picture. The similarity can therefore be obtained by comparing only the first pattern with the second pattern. The comparison of complex full pictures is thus converted into a comparison of selected patterns, which reduces computational complexity and saves computing resources without affecting recognition accuracy.
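The pattern-only comparison might be sketched as follows: extract the set of green pixels from each picture and compare only those sets, here with intersection-over-union. The text does not specify the pattern identification method, so the mask extraction and the IoU score are assumptions; real pattern recognition would tolerate colour noise and misalignment.

```python
def green_mask(picture, green=(0, 255, 0)):
    """Coordinates of pixels matching the converted green pattern (assumed RGB)."""
    return {(x, y) for y, row in enumerate(picture)
                   for x, px in enumerate(row) if px == green}

def pattern_similarity(env_picture, scene_picture):
    """Compare only the extracted green patterns rather than the full pictures.

    Sketch using intersection-over-union of the green masks; the actual
    pattern identification method is not given in the text.
    """
    a, b = green_mask(env_picture), green_mask(scene_picture)
    if not a and not b:
        return 0.0          # neither picture contains a pattern
    return len(a & b) / len(a | b)
```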
The first pattern can be any feasible pattern, such as a flower pattern, and is set so that the exact scene can be identified quickly; the second pattern is the counterpart of the first pattern within the scene picture, so the correct comparison video can be found quickly through the preset correspondence among similarity values, scene pictures and comparison videos.
In one embodiment, the step S4 of retrieving the multiple scene pictures from the preset scene picture database includes:
s411, acquiring a schedule list of the user;
s412, inquiring the schedule list to obtain the current working content corresponding to the current time;
s413, searching the current working content for a designated field in a preset scene picture database, so as to obtain a plurality of scene pictures corresponding to the current working content; wherein, the appointed field of the scene picture in the scene picture database is recorded with the work content information in advance.
As described above, a plurality of scene pictures are called from the preset scene picture database. How the scene pictures are retrieved has a significant impact on the subsequent similarity calculation: if the called scene pictures are accurate and few, the similarity calculation is easier and faster. The method obtains the user's schedule, queries it for the work content corresponding to the current time, and searches the designated field of the preset scene picture database for that work content, thereby obtaining a plurality of scene pictures related to the user's work at the current time. This realizes a preliminary screening that calls out a suitable set of scene pictures. At the same time, this design makes full use of the user's fragmented time between work periods (the comparison video is related to the current work content and is integrated and displayed with augmented reality technology during work breaks), thereby improving the learning effect.
In one embodiment, after the step S5 of selecting the maximum similarity value from the similarity values and determining whether the maximum similarity value is greater than a preset similarity threshold, the method further includes:
S51, if the maximum similarity value is not greater than a preset similarity threshold, acquiring personal information data of the user, and mapping the personal information data of the user into a first vector according to a preset vector mapping method; wherein the personal information data includes at least professional information;
S52, calling pre-collected personal information data of a reference object, and mapping the personal information data of the reference object into a second vector according to the preset vector mapping method; wherein the reference object has the same professional information as the user, and the personal information data of the reference object corresponds to a preset learning video;
S53, calculating a vector similarity value of the first vector and the second vector according to a preset vector similarity calculation method;
S54, judging whether the vector similarity value is greater than a preset vector similarity threshold;
S55, if the vector similarity value is greater than the preset vector similarity threshold, displaying the learning video on the screen of the augmented reality terminal.
As mentioned above, the learning video is displayed on the screen of the augmented reality terminal. If the maximum similarity value is not greater than the preset similarity threshold, none of the called scene pictures matches the current environment, so the comparison videos corresponding to those scene pictures are not applicable. In order to still display an applicable video, the user's personal information data is acquired and mapped into a first vector according to a preset vector mapping method; the pre-collected personal information data of a reference object is called and mapped into a second vector in the same way; and the vector similarity value of the first vector and the second vector is calculated. If the vector similarity value is greater than the preset vector similarity threshold, the learning video associated with the reference object is displayed on the screen of the augmented reality terminal. Because the learning video is one that was applicable to a reference object similar to the user, the pertinence and accuracy of the learning video display are guaranteed.
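Steps S51 to S55 can be sketched as below. The one-hot profession encoding, the cosine comparison, and all field names are illustrative assumptions; the patent states only that a preset vector mapping method and a preset vector similarity calculation method are used.

```python
import math

def map_to_vector(profile, professions):
    """Map personal-information data to a numeric vector.

    One-hot profession plus a scaled experience value — a deliberately
    simple encoding standing in for the patent's unspecified 'preset
    vector mapping method'.
    """
    vec = [1.0 if profile["profession"] == p else 0.0 for p in professions]
    vec.append(min(profile.get("years_experience", 0), 40) / 40.0)
    return vec

def cosine(a, b):
    """A stand-in vector similarity in [0, 1] for same-direction vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def pick_learning_video(user, references, threshold, professions):
    """S51-S55: compare the user's vector against each reference object
    sharing the user's profession; return the learning video of the
    first sufficiently similar reference object, else None."""
    first = map_to_vector(user, professions)
    for ref in references:
        if ref["profession"] != user["profession"]:
            continue  # the reference object must share the profession
        second = map_to_vector(ref, professions)
        if cosine(first, second) > threshold:
            return ref["learning_video"]
    return None
```

In the patent's flow, returning a video here corresponds to displaying it on the augmented reality terminal's screen; returning `None` means no sufficiently similar reference object was found.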
In one embodiment, the step S53 of calculating the vector similarity value between the first vector and the second vector according to a preset vector similarity calculation method includes:
S531, according to a preset formula (presented only as an image in the original publication and not reproduced here),
calculating a vector similarity value X of the first vector and the second vector, wherein M is the first vector, N is the second vector, Mi is the ith component of the first vector, Ni is the ith component of the second vector, and both the first vector and the second vector have p components.
As described above, the vector similarity value of the first vector and the second vector is calculated. The application adopts a special formula (shown only as an image in the original publication) that considers not only the numerical (magnitude) difference between the vectors but also the angular difference between them, thereby improving the accuracy of the vector similarity calculation. A larger vector similarity value X indicates that the first vector and the second vector are more similar; a smaller value indicates that they are less similar. The maximum value of X is 1; that is, when X equals 1, the first vector and the second vector are most similar.
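The exact formula appears only as an image in the original publication, so the sketch below should not be read as the patent's formula. It merely assumes one common construction with the stated properties: a cosine term capturing the angular difference multiplied by a magnitude-agreement term, so that X peaks at 1 exactly when the two vectors coincide.

```python
import math

def vector_similarity(m, n):
    """Illustrative similarity combining angle and magnitude.

    NOT the patent's formula (which is unavailable in the text); an
    assumed construction with the same stated properties: larger is
    more similar, and the maximum value 1 is reached when m == n.
    """
    dot = sum(mi * ni for mi, ni in zip(m, n))
    norm_m = math.sqrt(sum(mi * mi for mi in m))
    norm_n = math.sqrt(sum(ni * ni for ni in n))
    if norm_m == 0 or norm_n == 0:
        return 0.0
    cos = dot / (norm_m * norm_n)                        # angular difference
    mag = 2 * norm_m * norm_n / (norm_m**2 + norm_n**2)  # magnitude difference
    return cos * mag

vector_similarity([1, 2, 3], [1, 2, 3])  # identical vectors → 1.0
```

Note how the two factors behave: parallel vectors of unequal length score below 1 through the magnitude term, and orthogonal vectors score 0 through the cosine term, matching the description that both numerical and angular differences are considered.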
According to the augmented reality image display method based on the similarity value, the wearable device determines whether the user is in a working state; when the user is not working, an adapted comparison video (that is, a learning video) is obtained and integrated with the real-time video to produce an augmented reality image. Fragmented time is thus put to use without any manual selection by the user, improving the learning effect.
Referring to fig. 2, an embodiment of the present application further provides an augmented reality image display apparatus based on a similarity value, including:
the signal acquisition unit 1 is used for acquiring a muscle state signal sent by the wearable device; the muscle state signal is obtained by sensing through a preset muscle state sensor on the wearable device, and the muscle state signal comprises tension or relaxation;
a judging unit 2, configured to judge whether the muscle state signal is relaxed;
the acquisition unit 3 is used for acquiring a current environment picture by adopting a first camera preset on the augmented reality terminal if the muscle state signal is relaxed;
the calculating unit 4 is configured to call a plurality of scene pictures from a preset scene picture database, and calculate, according to a preset picture similarity calculating method, a similarity between the current environment picture and the scene picture, so as to obtain a plurality of similarity values respectively corresponding to the plurality of scene pictures;
a selecting unit 5, configured to select a maximum similarity value from the multiple similarity values, and determine whether the maximum similarity value is greater than a preset similarity threshold;
a comparison video obtaining unit 6, configured to obtain, if the maximum similarity value is greater than a preset similarity threshold, a specified comparison video corresponding to the maximum similarity value according to a preset correspondence between the similarity value, a scene picture, and a comparison video;
the display unit 7 is configured to collect a real-time video by using a second camera preset on the augmented reality terminal, perform integrated processing on the real-time video and the specified comparison video by using a preset augmented reality instruction, and display an augmented reality image obtained through the integrated processing on a screen of the augmented reality terminal; wherein the resolution of the second camera is higher than the resolution of the first camera.
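The decision logic of units 1 through 6 can be condensed into a small pure function. The signature, the string-valued muscle state, and the pairing of scene pictures with comparison videos are illustrative assumptions rather than the patent's API; the actual apparatus also involves the two cameras and the AR composition of unit 7, which are omitted here.

```python
def select_comparison_video(muscle_state, env_picture, scene_entries,
                            threshold, similarity):
    """Units 1-6 as one decision: which comparison video, if any, to show.

    scene_entries: list of (scene_picture, comparison_video) pairs,
    standing in for the scene picture database and the preset
    correspondence of similarity value, scene picture, and video.
    similarity: the preset picture-similarity calculation method.
    """
    if muscle_state != "relaxed":
        return None  # unit 2: the user is still working; do not interrupt
    # unit 4: score every called scene picture against the environment
    scored = [(similarity(env_picture, pic), vid)
              for pic, vid in scene_entries]
    best_score, best_video = max(scored, key=lambda s: s[0])
    # units 5-6: only a sufficiently similar scene yields a video;
    # otherwise the apparatus falls back to the personal-profile path
    return best_video if best_score > threshold else None
```

Separating this decision from camera capture and AR composition keeps the threshold logic trivially testable, which is one plausible reading of why the apparatus is described as independent units.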
In one embodiment, a first pattern drawn by a light conversion material exists in the current environment corresponding to the first camera, the light conversion material can convert infrared light into green visible light, and a second pattern of green is preset in the scene picture; the acquisition unit 3 includes:
the starting unit is used for starting a preset infrared flash lamp and acquiring a current environment picture by adopting a first camera preset on the augmented reality terminal, so that a green first pattern exists in the current environment picture;
and the first identification unit is used for identifying a first pattern from the current environment picture according to a preset pattern identification method.
In one embodiment, the calculation unit comprises:
the second identification unit is used for identifying a second pattern from the scene picture according to a preset pattern identification method;
and the comparison unit is used for comparing the first pattern with the second pattern so as to obtain the pattern similarity between the first pattern and the second pattern, and the pattern similarity is used as the similarity between the current environment picture and the scene picture.
In one embodiment, the calculation unit comprises:
the schedule acquisition unit is used for acquiring a schedule of the user;
the query unit is used for querying the schedule list to obtain the current working content corresponding to the current time;
the searching unit is used for searching the current working content for the specified field in the preset scene picture database so as to acquire a plurality of scene pictures corresponding to the current working content; wherein, the appointed field of the scene picture in the scene picture database is recorded with the work content information in advance.
In one embodiment, the apparatus for displaying augmented reality image based on similarity value further comprises:
a personal information obtaining unit, configured to obtain personal information data of the user if the maximum similarity value is not greater than a preset similarity threshold, and map the personal information data of the user into a first vector according to a preset vector mapping method; wherein the personal information data includes at least professional information;
the vector mapping unit is used for calling the pre-collected personal information data of the reference object and mapping the personal information data of the reference object into a second vector according to a preset vector mapping method; wherein the reference object has the same professional information as the user, and the personal information data of the reference object corresponds to a preset learning video;
the similarity calculation unit is used for calculating a vector similarity value of the first vector and the second vector according to a preset vector similarity calculation method;
the similarity judging unit is used for judging whether the vector similarity value is larger than a preset vector similarity threshold value or not;
and the learning video display unit is used for displaying the learning video on the screen of the augmented reality terminal if the vector similarity value is greater than a preset vector similarity threshold value.
As described above, it can be understood that each component of the augmented reality image display apparatus based on the similarity value provided in the present application can implement any function of the augmented reality image display method based on the similarity value, and the detailed structure is not repeated.
Referring to fig. 3, an embodiment of the present application further provides a computer device, which may be a server and whose internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program stored in the non-volatile storage medium. The database of the computer device is used for storing data such as the preset scene pictures and the preset picture similarity calculation method. The network interface of the computer device is used for communicating with an external terminal through a network connection. When executed by the processor, the computer program implements the augmented reality image display method based on the similarity value.
The augmented reality image display method based on the similarity value comprises the following steps: acquiring a muscle state signal sent by the wearable device; the muscle state signal is obtained by sensing through a preset muscle state sensor on the wearable device, and the muscle state signal comprises tension or relaxation; judging whether the muscle state signal is relaxed; if the muscle state signal is relaxed, acquiring a current environment picture by adopting a first camera preset on the augmented reality terminal; calling a plurality of scene pictures from a preset scene picture database, and calculating the similarity between the current environment picture and the scene picture according to a preset picture similarity calculation method so as to obtain a plurality of similarity values respectively corresponding to the plurality of scene pictures; selecting a maximum similarity value from the similarity values, and judging whether the maximum similarity value is greater than a preset similarity threshold value; if the maximum similarity value is larger than a preset similarity threshold value, acquiring a specified contrast video corresponding to the maximum similarity value according to the corresponding relation among the preset similarity value, the scene picture and the contrast video; acquiring a real-time video by using a second camera preset on the augmented reality terminal, performing integrated processing on the real-time video and the specified contrast video by using a preset augmented reality instruction, and displaying an augmented reality image obtained through the integrated processing on a screen of the augmented reality terminal; wherein the resolution of the second camera is higher than the resolution of the first camera.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method for displaying an augmented reality image based on a similarity value is implemented, including: acquiring a muscle state signal sent by the wearable device; the muscle state signal is obtained by sensing through a preset muscle state sensor on the wearable device, and the muscle state signal comprises tension or relaxation; judging whether the muscle state signal is relaxed; if the muscle state signal is relaxed, acquiring a current environment picture by adopting a first camera preset on the augmented reality terminal; calling a plurality of scene pictures from a preset scene picture database, and calculating the similarity between the current environment picture and the scene picture according to a preset picture similarity calculation method so as to obtain a plurality of similarity values respectively corresponding to the plurality of scene pictures; selecting a maximum similarity value from the similarity values, and judging whether the maximum similarity value is greater than a preset similarity threshold value; if the maximum similarity value is larger than a preset similarity threshold value, acquiring a specified contrast video corresponding to the maximum similarity value according to the corresponding relation among the preset similarity value, the scene picture and the contrast video; acquiring a real-time video by using a second camera preset on the augmented reality terminal, performing integrated processing on the real-time video and the specified contrast video by using a preset augmented reality instruction, and displaying an augmented reality image obtained through the integrated processing on a screen of the augmented reality terminal; wherein the resolution of the second camera is higher than the resolution of the first camera.
According to the executed augmented reality image display method based on the similarity value, the wearable device determines whether the user is in a working state; when the user is not working, an adapted comparison video (that is, a learning video) is obtained and integrated with the real-time video to produce an augmented reality image. Fragmented time is thus put to use without any manual selection by the user, improving the learning effect.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium provided herein and used in the examples may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a/an" does not exclude the presence of other like elements in a process, apparatus, article, or method that includes the element.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. An augmented reality image display method based on a similarity value, applied to an augmented reality terminal, the augmented reality terminal being in signal connection with a preset wearable device, the wearable device being sleeved on a user's arm when in use and being capable of detecting the muscle state of the user's arm, wherein the augmented reality image display method based on the similarity value comprises:
acquiring a muscle state signal sent by the wearable device; the muscle state signal is obtained by sensing through a preset muscle state sensor on the wearable device, and the muscle state signal comprises tension or relaxation;
judging whether the muscle state signal is relaxed;
if the muscle state signal is relaxed, acquiring a current environment picture by adopting a first camera preset on the augmented reality terminal;
calling a plurality of scene pictures from a preset scene picture database, and calculating the similarity between the current environment picture and the scene picture according to a preset picture similarity calculation method so as to obtain a plurality of similarity values respectively corresponding to the plurality of scene pictures;
selecting a maximum similarity value from the similarity values, and judging whether the maximum similarity value is greater than a preset similarity threshold value;
if the maximum similarity value is larger than a preset similarity threshold value, acquiring a specified contrast video corresponding to the maximum similarity value according to the corresponding relation among the preset similarity value, the scene picture and the contrast video;
acquiring a real-time video by using a second camera preset on the augmented reality terminal, performing integrated processing on the real-time video and the specified contrast video by using a preset augmented reality display instruction, and displaying an augmented reality image obtained through the integrated processing on a screen of the augmented reality terminal; wherein the resolution of the second camera is higher than the resolution of the first camera.
2. The method for displaying augmented reality images based on similarity value according to claim 1, wherein a first pattern drawn by a light conversion material exists in the current environment corresponding to the first camera, the light conversion material can convert infrared light into green visible light, and a second pattern of green is preset in the scene picture; the step of adopting a first camera preset on the augmented reality terminal to acquire a current environment picture comprises the following steps:
starting a preset infrared flash lamp, and acquiring a current environment picture by adopting a first camera preset on the augmented reality terminal, so that a green first pattern exists in the current environment picture;
and identifying a first pattern from the current environment picture according to a preset pattern identification method.
3. The method as claimed in claim 2, wherein the step of calculating the similarity between the current environmental picture and the scene picture according to a predetermined picture similarity calculation method comprises:
identifying a second pattern from the scene picture according to a preset pattern identification method;
and comparing the first pattern with the second pattern to obtain the pattern similarity between the first pattern and the second pattern, and taking the pattern similarity as the similarity between the current environment picture and the scene picture.
4. The method as claimed in claim 1, wherein the step of retrieving the plurality of scene pictures from the preset scene picture database comprises:
acquiring a schedule of the user;
inquiring the schedule list to obtain the current working content corresponding to the current time;
searching the current working content for a designated field in a preset scene picture database so as to acquire a plurality of scene pictures corresponding to the current working content; wherein, the appointed field of the scene picture in the scene picture database is recorded with the work content information in advance.
5. The method as claimed in claim 1, wherein after the step of selecting a maximum similarity value from the similarity values and determining whether the maximum similarity value is greater than a predetermined similarity threshold value, the method further comprises:
if the maximum similarity value is not greater than a preset similarity threshold value, acquiring personal information data of the user, and mapping the personal information data of the user into a first vector according to a preset vector mapping method; wherein the personal information data includes at least professional information;
calling pre-collected personal information data of a reference object, and mapping the personal information data of the reference object into a second vector according to a preset vector mapping method; wherein the reference object has the same professional information as the user, and the personal information data of the reference object corresponds to a preset learning video;
calculating a vector similarity value of the first vector and the second vector according to a preset vector similarity calculation method;
judging whether the vector similarity value is larger than a preset vector similarity threshold value or not;
and if the vector similarity value is larger than a preset vector similarity threshold value, displaying the learning video on a screen of the augmented reality terminal.
6. The method as claimed in claim 5, wherein the step of calculating the vector similarity between the first vector and the second vector according to a predetermined vector similarity calculation method comprises:
according to the formula:
(The formula is presented as an image in the original publication and is not reproduced here.)
and calculating a vector similarity value X of the first vector and the second vector, wherein M is the first vector, N is the second vector, Mi is the ith component vector of the first vector, Ni is the ith component vector of the second vector, and the first vector and the second vector have p component vectors.
7. An augmented reality image display device based on similarity value, comprising:
the signal acquisition unit is used for acquiring a muscle state signal sent by the wearable device; the muscle state signal is obtained by sensing through a preset muscle state sensor on the wearable device, and the muscle state signal comprises tension or relaxation;
the judging unit is used for judging whether the muscle state signal is relaxed or not;
the acquisition unit is used for acquiring a current environment picture by adopting a first camera preset on the augmented reality terminal if the muscle state signal is relaxed;
the calculation unit is used for calling a plurality of scene pictures from a preset scene picture database, and calculating the similarity between the current environment picture and the scene picture according to a preset picture similarity calculation method so as to obtain a plurality of similarity values respectively corresponding to the scene pictures;
the selecting unit is used for selecting a maximum similarity value from the similarity values and judging whether the maximum similarity value is larger than a preset similarity threshold value or not;
a comparison video obtaining unit, configured to obtain, if the maximum similarity value is greater than a preset similarity threshold, a specified comparison video corresponding to the maximum similarity value according to a correspondence between a preset similarity value, a scene picture, and a comparison video;
the display unit is used for acquiring a real-time video by using a second camera preset on the augmented reality terminal, performing integrated processing on the real-time video and the specified contrast video by using a preset augmented reality instruction, and displaying an augmented reality image obtained through the integrated processing on a screen of the augmented reality terminal; wherein the resolution of the second camera is higher than the resolution of the first camera.
8. The apparatus according to claim 7, wherein a first pattern drawn by a light conversion material is present in the current environment corresponding to the first camera, the light conversion material is capable of converting infrared light into green visible light, and a second pattern of green color is preset in the scene picture; the acquisition unit includes:
the starting unit is used for starting a preset infrared flash lamp and acquiring a current environment picture by adopting a first camera preset on the augmented reality terminal, so that a green first pattern exists in the current environment picture;
and the first identification unit is used for identifying a first pattern from the current environment picture according to a preset pattern identification method.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN202010906391.9A 2020-09-01 2020-09-01 Augmented reality image display method, device, equipment and storage medium Pending CN112034985A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010906391.9A CN112034985A (en) 2020-09-01 2020-09-01 Augmented reality image display method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010906391.9A CN112034985A (en) 2020-09-01 2020-09-01 Augmented reality image display method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112034985A true CN112034985A (en) 2020-12-04

Family

ID=73590939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010906391.9A Pending CN112034985A (en) 2020-09-01 2020-09-01 Augmented reality image display method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112034985A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011132382A1 (en) * 2010-04-19 2011-10-27 日本電気株式会社 Information providing system, information providing method, and program for providing information
CN110809187A (en) * 2019-10-31 2020-02-18 Oppo广东移动通信有限公司 Video selection method, video selection device, storage medium and electronic equipment
US20200097083A1 (en) * 2018-09-26 2020-03-26 Qiushi Mao Neuromuscular control of physical objects in an environment
CN111311758A (en) * 2020-02-24 2020-06-19 Oppo广东移动通信有限公司 Augmented reality processing method and device, storage medium and electronic equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011132382A1 (en) * 2010-04-19 2011-10-27 日本電気株式会社 Information providing system, information providing method, and program for providing information
US20200097083A1 (en) * 2018-09-26 2020-03-26 Qiushi Mao Neuromuscular control of physical objects in an environment
CN110809187A (en) * 2019-10-31 2020-02-18 Oppo广东移动通信有限公司 Video selection method, video selection device, storage medium and electronic equipment
CN111311758A (en) * 2020-02-24 2020-06-19 Oppo广东移动通信有限公司 Augmented reality processing method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
JP6249263B2 (en) Face recognition system, face recognition method, display control apparatus, display control method, and display control program
JP4792824B2 (en) Motion analysis device
CN110222672A (en) The safety cap of construction site wears detection method, device, equipment and storage medium
CN110569874A (en) Garbage classification method and device, intelligent terminal and storage medium
JP7286010B2 (en) Human body attribute recognition method, device, electronic device and computer program
KR20200145827A (en) Facial feature extraction model learning method, facial feature extraction method, apparatus, device, and storage medium
CN112489143A (en) Color identification method, device, equipment and storage medium
CN110059666B (en) Attention detection method and device
CN113160087B (en) Image enhancement method, device, computer equipment and storage medium
CN113179421B (en) Video cover selection method and device, computer equipment and storage medium
CN112034985A (en) Augmented reality image display method, device, equipment and storage medium
CN106910207B (en) Method and device for identifying local area of image and terminal equipment
CN109697421A (en) Evaluation method, device, computer equipment and storage medium based on micro- expression
CN112950443A (en) Adaptive privacy protection method, system, device and medium based on image sticker
CN113569594A (en) Method and device for labeling key points of human face
CN112418033A (en) Landslide slope surface segmentation and identification method based on mask rcnn neural network
CN116055806A (en) Mode switching processing method and device of intelligent terminal, terminal and storage medium
WO2017035390A1 (en) System, method, and apparatus for a color search
CN110442242A (en) A kind of smart mirror system and control method based on the interaction of binocular space gesture
CN113808107B (en) Image recommendation method, device, electronic equipment and storage medium
CN112822393B (en) Image processing method and device and electronic equipment
CN113127663B (en) Target image searching method, device, equipment and computer readable storage medium
CN111079617B (en) Poultry identification method and device, readable storage medium and electronic equipment
CN110781739B (en) Method, device, computer equipment and storage medium for extracting pedestrian characteristics
CN113129227A (en) Image processing method, image processing device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201204
