CN111222537A - Augmented reality system and method capable of being rapidly manufactured and propagated - Google Patents


Info

Publication number
CN111222537A
Authority
CN
China
Prior art keywords
module
augmented reality
identification
contour
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911136300.1A
Other languages
Chinese (zh)
Inventor
雷雨川
许金磊
唐晗
Current Assignee
Hangzhou Event Horizon Technology Co Ltd
Original Assignee
Hangzhou Event Horizon Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Event Horizon Technology Co Ltd filed Critical Hangzhou Event Horizon Technology Co Ltd
Priority to CN201911136300.1A priority Critical patent/CN111222537A/en
Publication of CN111222537A publication Critical patent/CN111222537A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention discloses a system and method for rapidly manufacturing and propagating augmented reality, comprising an image acquisition module, an identification marking module, a matching preprocessing module, a matching processing and analysis module, a directional issuing module, and an identification display module. The image acquisition module transmits image information through the identification marking module to the matching preprocessing module, which passes it on to the matching processing and analysis module; the matching processing and analysis module transmits the processed information through the directional issuing module to the identification display module. The method comprises the steps of: 1) collecting image information; 2) identifying and marking the image information; 3) preprocessing the image information before matching; 4) matching, processing, and storing the image information; 5) transmitting the image information; and 6) recognizing and displaying the image information. The invention aims to provide a system that can rapidly manufacture and propagate augmented reality: it quickly generates an image identification mark, matches it with conventional video material, and produces a matching augmented reality effect, thereby reducing manufacturing difficulty.

Description

Augmented reality system and method capable of being rapidly manufactured and propagated
Technical Field
The invention relates to the technical field of augmented reality, in particular to a system and a method for rapidly manufacturing and spreading augmented reality.
Background
Augmented reality technology mainly combines a live camera image with a pre-made model (or video) through an image compositing algorithm to display a new picture with depth of field.
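The compositing step described above can be sketched as per-pixel alpha blending. This is a minimal illustration under assumed conventions (RGB tuples, a float alpha mask); the patent does not specify the compositing algorithm itself.

```python
def alpha_blend(real_px, virtual_px, alpha):
    """Blend a rendered virtual pixel over a camera pixel.

    alpha = 1.0 shows only the virtual content, 0.0 only the real frame.
    Pixels are (r, g, b) tuples with 0-255 integer channels.
    """
    return tuple(round(alpha * v + (1 - alpha) * r)
                 for v, r in zip(virtual_px, real_px))

def composite(real_frame, virtual_frame, mask):
    """Overlay virtual_frame onto real_frame using a per-pixel alpha mask."""
    return [[alpha_blend(r, v, a)
             for r, v, a in zip(rrow, vrow, mrow)]
            for rrow, vrow, mrow in zip(real_frame, virtual_frame, mask)]
```

In a real renderer the mask would come from the model's silhouette after depth testing; here it is simply supplied by the caller.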
Existing augmented reality systems or devices focus on matching a pre-created model (or video) with pre-set identification information (identification tags).
Second, in existing augmented reality system frameworks, the generated picture has no propagation characteristics: issuing is uniform, meaning every viewer sees the same picture information. For the same viewing scene, the picture presented to each user is identical (or a fixed pseudo-random presentation) and cannot be meaningfully differentiated. Moreover, limited by the memory and storage capacity of current electronic devices, existing equipment cannot present and read massive numbers of viewfinder scenes and their corresponding pictures in one-to-one correspondence. Short of having a cloud server compute the matching information and transmit the results back to the terminal, there is a theoretical upper limit on the number of augmented reality effects a single device can recognize.
In terms of concrete application, the first step of a traditional augmented reality system is to search the image on the electronic device's display screen for an identification mark; no method for freely manufacturing and generating identification marks is involved. Furthermore, the original augmented reality system can bind only one associated effect to a given mark, and that effect cannot be directionally controlled through the same identification mark (e.g., device A can see it while device B cannot; device B can see it only after device A authorizes it; or device A sees scene a while device B sees scene b). For these two reasons, augmented reality has been limited by manufacturing difficulty, lack of propagation, and slow development of directional identification.
Chinese patent application No. 201611146660.6, filed December 13, 2016 and published June 23, 2017 under the title "Virtual object distribution method and device based on augmented reality", discloses an interaction mode that, based on AR technology, combines an online demand for distributing a virtual object to a user with the user's offline image-scanning operation in an augmented reality client. Through the AR client, the user can actively scan a preset graphic identifier in an offline environment, triggering the AR server to issue to the client an electronic certificate for extracting a virtual object, and can collect the certificates issued by the AR server. When the number of certificate categories collected by the user reaches a preset threshold, the user obtains distribution rights for the virtual object: the AR client sends the AR server a distribution request containing the collected electronic certificates, whose category count equals the preset threshold, and the AR server allocates an object to the user from a preset virtual object set, significantly improving the interactivity and interest of virtual object distribution.
That patent document discloses a virtual object allocation method and device based on augmented reality, but it still does not solve the problems that augmented reality technology is limited by manufacturing difficulty, lacks propagation mechanisms, and develops directional identification slowly.
Disclosure of Invention
In view of this, the present invention provides a system and method that can quickly generate an image identification mark and match it with conventional video material to produce a matching augmented reality effect, thereby reducing manufacturing difficulty and enabling augmented reality to be rapidly manufactured and propagated.
To achieve the object of the invention, the following technical scheme may be adopted:
a rapid manufacturing and propagation augmented reality system comprises an image acquisition module, an identification marking module, a matching preprocessing module, a matching processing analysis module, a directional issuing module and an identification display module; the image acquisition module is used for acquiring image information; the identification mark module is used for processing the image information and generating an identification mark; the matching preprocessing module is used for analyzing and processing the contour information of the image information; the matching processing analysis module is used for further analyzing and processing the image information processed by the matching preprocessing module; the directional issuing module is used for directionally transmitting the image information; the identification display module is used for identifying, converting and displaying the image information;
the image acquisition module transmits image information to the matching preprocessing module through the identification marking module, and the matching preprocessing module transmits preprocessed image information to the matching processing analysis module; and the matching processing and analyzing module transmits the processed information to the identification and display module through the directional issuing module.
The image acquisition module comprises a video acquisition module.
The image acquisition module comprises a picture acquisition module.
The identification marking module comprises a plane contour extraction and recognition module, or a fitting-processing recognition module, or an affine transformation module.
The matching preprocessing module comprises an image classification module.
The matching preprocessing module comprises an image cropping module.
The directional issuing module comprises an identification mark issuing module or a geographic position issuing module or a point-to-point issuing module.
In order to achieve the second object of the present invention, the following technical solutions may be adopted:
a method for rapidly making and spreading augmented reality comprises the following steps:
step 1) collecting image information;
step 2) identifying and marking the acquired image information;
step 3) preprocessing the image information of the identification mark before matching;
step 4), matching, processing and storing the image information;
step 5), transmitting image information;
and 6) identifying and displaying the image information.
The step 1) comprises the step of collecting video information through a video collecting module.
And the video acquisition comprises recording image information through the mobile terminal, and preliminarily compressing to obtain target image video information.
The step 2) comprises making the collected image information into one face of the identification-mark picture and determining a closed continuous edge contour of that face, recorded as the initial contour.
The initial contour is subjected to contour sharpening and polygon fitting according to its features: the approximate shape of the contour undergoes vector transformation or maximum-overlap judgment transformation so that the captured contour shape becomes the initial shape of a standard regular figure, recorded as the standard contour.
The initial contour also covers the case where the image information contains several closed continuous edge contours in the same picture; screening is performed by contour position and size, and a centered, relatively large contour is selected as the initial contour by default.
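The default selection among several candidate contours can be sketched as a simple score combining size and centeredness. The weighting below is an assumption for illustration; the patent only requires a "centered and relatively large" contour.

```python
def pick_initial_contour(contours, image_w, image_h):
    """Choose one contour when several closed edge contours are found.

    Each contour is a list of (x, y) points. The score favours a larger
    bounding-box area and a centroid close to the image centre; both terms
    are normalised so they are comparable across image sizes.
    """
    cx, cy = image_w / 2, image_h / 2
    max_dist = (cx ** 2 + cy ** 2) ** 0.5

    def score(contour):
        xs = [p[0] for p in contour]
        ys = [p[1] for p in contour]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        dist = ((mx - cx) ** 2 + (my - cy) ** 2) ** 0.5
        # Normalised area minus normalised off-centre distance.
        return area / (image_w * image_h) - dist / max_dist

    return max(contours, key=score)
```

A small, centered contour can thus beat a slightly larger contour pushed into a corner, matching the "centered and relatively large" default.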
The step 2) further comprises cutting out the image inside the standard contour of the acquired image information, recorded as the original image.
Secondary vector processing is performed on the standard contour features by affine transformation so that the shape features of the image more closely approach the physical contour of the object; this shape is taken as the recognition contour. Pixel points in the original image are then affine-transformed into the recognition contour region, and the generated image is taken as the identification mark.
The identification mark comprises sharpening an arbitrary initial contour into a polygon, fitting that polygon approximately to a quadrilateral standard contour, and finally vector-converting the quadrilateral into the nearest rectangle as the recognition contour.
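One simple reading of "vector-converting the quadrilateral into the nearest rectangle" is to average the opposing edges of the fitted quadrilateral into axis-aligned sides. The construction below is an illustrative assumption; the patent does not fix the exact conversion.

```python
def quad_to_rectangle(quad):
    """Fit an axis-aligned rectangle to a fitted quadrilateral.

    quad: four (x, y) corners ordered top-left, top-right,
    bottom-right, bottom-left. Each rectangle side is the average
    of the two quadrilateral corners that should lie on it.
    """
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    left = (x0 + x3) / 2
    right = (x1 + x2) / 2
    top = (y0 + y1) / 2
    bottom = (y2 + y3) / 2
    return [(left, top), (right, top), (right, bottom), (left, bottom)]
```

For a quadrilateral that is already a rectangle the function returns it unchanged, so the conversion is stable for well-aligned scans.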
A standard contour that can be converted into a recognition contour and yield an identification mark is recorded as a quasi-identification mark.
The standard contour can be converted into a recognition contour by means of an intelligent gyroscope: the gyroscope's polar coordinate matrix at scanning time is read, the tilt angle of the device during scanning is analyzed, and the standard contour is projectively transformed according to that tilt angle to generate an accurate recognition contour. Alternatively, for terminal devices with multiple cameras, the tilt angle of the object in three-dimensional imaging is judged by a depth-of-field algorithm, and the recognition contour is generated after projective transformation.
The standard contour may also be used directly as the recognition contour; in that case the identification-mark picture can be given global contrast equalization and illumination attenuation, reducing the influence of lighting on picture quality.
The step 3) comprises, when the recognition contour of the image information is rectangular, zooming and rotating the rectangular contour so that it lies entirely within the target video frame, cutting the target video image to the shape of the contour, and recording the cut video as effect video information.
For cropping, the recognition contour is overlapped with the center point of the target video, then rotated and vector-scaled so that it lies entirely inside the target video boundary with its area maximized.
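For axis-aligned bounding boxes the "entirely inside, area maximized" condition reduces to taking the smaller of the two axis scale ratios and centering the result. This sketch omits the rotation search the patent mentions and assumes box-shaped contours.

```python
def fit_contour_in_video(contour_w, contour_h, video_w, video_h):
    """Uniform scale and centering offset that place the recognised
    contour fully inside the target video with maximal area.

    Returns (scale, (offset_x, offset_y)) where the offset is the
    top-left position of the scaled contour inside the video frame.
    """
    # The binding axis determines the largest admissible uniform scale.
    scale = min(video_w / contour_w, video_h / contour_h)
    # Centre the scaled contour on the video's centre point.
    offset_x = (video_w - contour_w * scale) / 2
    offset_y = (video_h - contour_h * scale) / 2
    return scale, (offset_x, offset_y)
```

Adding rotation would mean evaluating this fit over a range of angles and keeping the angle with the largest scaled area.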
The step 3) also comprises, when the recognition contour of the image information is non-rectangular, making a rectangular vector mask over the non-rectangular contour so that the mask completely covers it; the mask contour is recorded as the extended contour.
The step 4) comprises extracting feature value information from the image information and matching it with the effect video; the image information, the feature values of the image identification mark, the matching information, and the video data are associated, recorded as the associated data of the augmented reality effect, and stored in the cloud server.
The step 6) comprises recognizing a picture in the scene and converting it into computer-language feature values; these are compared, by partition detection or multilayer neural-network machine learning, with the feature values of all identification marks associated with the terminal device. Reaching a certain similarity with some identification mark's feature values means recognition succeeds; otherwise it fails. After successful recognition, the playing area of the effect video is adjusted according to the recognized object's target size and the current attitude data, achieving the augmented reality effect.
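The similarity test in step 6) can be sketched with cosine similarity over feature vectors. Both the similarity measure and the 0.9 threshold are illustrative assumptions; the patent requires only "a certain similarity" and does not name a metric.

```python
def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def match_mark(scene_features, mark_features, threshold=0.9):
    """Compare a scene's feature vector against every identification
    mark associated with this device (mark_features: id -> vector).

    Returns the best-matching mark id at or above the threshold,
    or None when recognition fails.
    """
    best_id, best_sim = None, threshold
    for mark_id, feats in mark_features.items():
        sim = cosine_similarity(scene_features, feats)
        if sim >= best_sim:
            best_id, best_sim = mark_id, sim
    return best_id
```

Because matching runs only over the marks associated with the device, the candidate set stays small, which is the point of the directional issuing described later.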
The technical scheme provided by the invention has the following beneficial effects: 1) it rapidly generates an image identification mark and matches it with conventional video material to produce a matching augmented reality effect, greatly reducing the difficulty of producing such effects; 2) it offers specific issuing modes, remedying the weak propagation and missing directional identification of existing augmented reality systems; 3) it works well, meets market demand, and is suitable for general popularization.
Drawings
FIG. 1 is a block diagram of a system for rapid production and propagation of augmented reality according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for rapidly manufacturing and propagating augmented reality according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and embodiments thereof.
Example 1
Referring to fig. 1, a system capable of rapidly manufacturing and propagating augmented reality includes an image acquisition module 1, an identification mark module 2, a matching preprocessing module 3, a matching processing analysis module 4, a directional issuing module 5, and an identification display module 6; the image acquisition module 1 is used for acquiring image information; the identification marking module 2 is used for identifying and marking the image information; the matching preprocessing module 3 is used for analyzing and processing the contour information of the image information; the matching processing analysis module 4 is used for further analyzing and processing the image information processed by the matching preprocessing module; the directional issuing module 5 is used for directionally transmitting the image information; the identification display module 6 is used for identifying, converting and displaying the image information;
the image acquisition module 1 transmits image information to the matching preprocessing module 3 through the identification marking module 2, and the matching preprocessing module 3 transmits preprocessed image information to the matching processing analysis module 4; the matching processing and analyzing module 4 transmits the processed information to the identification and display module 6 through the directional issuing module 5.
In this embodiment, preferably, the image capturing module 1 includes a video capturing module 11. The video acquisition module 11 records image information through the mobile terminal, and primarily compresses the image information to obtain target image video information.
In this embodiment, preferably, the image capturing module includes a picture capturing module, and the picture capturing module captures a still image.
In this embodiment, the image acquisition module 1 may acquire the picture information first and then acquire the video information. The collected image information is converted into an identification mark in a subsequent identification mark module, and the collected video information is adjusted and corrected in the matching preprocessing module and is displayed in the identification display module.
In this embodiment, the identification marking module 2 includes a plane contour extraction and recognition module 21, a fitting-processing recognition module 22, or an affine transformation module 23.
The plane contour extraction module 21 is configured to identify a contour of a certain plane of an image for an acquired image;
the fitting processing identification module 22 is used for identifying the acquired image and performing edge lubrication on the image to form a regular graph;
the affine transformation module 23 is configured to eliminate an angle error of the captured image, and is closer to a physical contour of a real picture.
In this embodiment, the matching preprocessing module 3 is mainly used for performing video correspondence and clipping on a target image; the matching pre-processing module 3 includes an image classification module 31 and an image cropping module 32.
The image classification module 31 is used for classifying the identified image information according to a certain rule;
the image cropping module 32 overlaps the identified contour with the center point of the target video, and performs rotation and vector scaling on the identified contour to ensure that the identified contour is completely inside the boundary of the target video and the area is maximized.
The matching processing and analyzing module 4 can realize the augmented reality effect by analyzing the processed image.
In this embodiment, the directional issuing module 5 includes an identification mark issuing module 51, a geographic location issuing module 52, or a point-to-point issuing module 53.
The identification mark issuing module 51 is used for transmitting the identification mark of the image.
The geographic position issuing module 52 is configured to transmit image information according to the geographic information tag information.
The point-to-point issuing module is used for point-to-point directional information transmission.
Unlike a traditional augmented reality system, in which the augmented reality effect is either realized only locally or made visible to all devices through the cloud, this embodiment includes multiple information transmission modes. Augmented reality effects thus gain multiple directional propagation modes: the device that produced an effect, or an authorized device, selects the devices to which it is visible according to certain rules, respecting privacy while making propagation convenient. Meanwhile, for non-target devices, the system does not send the relevant recognition information and video data, greatly relieving the recognition pressure brought by growing augmented reality content production and releasing device memory and storage pressure.
In this embodiment, the invention provides a way to bind an augmented reality effect to terminal equipment. The cloud server records the data number (Associated_data_ID) of each set of associated data in the system, and each terminal device generates a unique device number (Device_ID) when it uses the system. For each device number, the cloud server keeps a form of all data numbers associated with that device. When a device is authorized for an augmented reality effect (visible when enabled), the effect's data number is added to the form for that device number. When the device checks whether a scene contains an identification mark, it performs feature-value matching only against the marks issued according to its form.
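The cloud-side binding between Device_ID forms and Associated_data_ID entries can be sketched as a small registry. The identifier names follow the patent; the dict-based storage and method names are assumptions for illustration.

```python
class EffectRegistry:
    """Minimal sketch of the cloud server's per-device forms binding
    augmented reality effects (Associated_data_ID) to devices (Device_ID).
    """

    def __init__(self):
        # Device_ID -> set of Associated_data_ID visible to that device.
        self.forms = {}

    def authorize(self, device_id, data_id):
        """Make one augmented reality effect visible to one device."""
        self.forms.setdefault(device_id, set()).add(data_id)

    def propagate(self, src_device, dst_devices, data_ids):
        """One-to-many directional propagation: copy effect numbers
        from an authorized device's form onto each receiving form."""
        src = self.forms.get(src_device, set())
        for data_id in data_ids:
            if data_id not in src:
                raise PermissionError(
                    f"{src_device} is not authorized for {data_id}")
            for dst in dst_devices:
                self.authorize(dst, data_id)

    def marks_to_match(self, device_id):
        """Only marks on the device's own form are issued for matching."""
        return self.forms.get(device_id, set())
```

The User_ID variant mentioned later would simply key `forms` by account instead of (or alongside) the device number.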
In this embodiment, during transmission the manufacturing device or an authorized device holds the authorization right and can selectively grant the receiving device the right to propagate the augmented reality effect again, completing the authorization. The authorization form includes, but is not limited to, authorized propagation type, authorized propagation time, and authorized propagation count, enabling chained or multi-level propagation of an augmented reality effect.
Furthermore, the binding of an augmented reality effect is not limited to terminal equipment: an independent account can be registered in the system through an intelligent terminal device, with account information (User_ID) replacing the device number and corresponding to the data numbers. On this basis, associated data can be issued by account information alone or by a combination of account information and device number.
In this embodiment, the directional issuing module 5 can propagate a given augmented reality effect directionally: the manufacturing device or an authorized device completes the propagation by entering the coded information of the receiving device. A replication instruction is uploaded to the cloud server, which adds the corresponding data number to the receiving device's data-number form; when the receiving device then recognizes the scene, the identification mark for the effect is recognized successfully.
Further, one-to-one directional propagation can be extended to one-to-many, many-to-one, and many-to-many propagation: several augmented reality effects that a device has made or been authorized for can be transmitted simultaneously to several authorized devices by the same method, with issuing controlled by adding the data numbers to the corresponding forms.
In this embodiment, the directional issuing module 5 can also propagate an effect to receiving devices that satisfy rules set by the manufacturing device or an authorized device. When a device meets the specified rule while recognizing a scene, the cloud server adds the corresponding data number to that device's data-number form and completes the issuing.
In this embodiment, the rules of the geographic position issuing module 52 include the geographic location of the terminal when the scene is recognized (LBS-based transmission-control rules); the time period in which the scene is recognized; the first several terminals to recognize the scene; and the manufacturer, model, and system version number of the intelligent terminal device.
Furthermore, besides the system's built-in rules, the system can assign each device a unique distinguishing mode or type in some manner, generate a new propagation rule, and notify all terminals in the system; terminal devices may then also choose such system-generated propagation rules. Such unique distinctions include, but are not limited to, the time a device joined the system; its geographic location when joining; the batch number of devices joining together; payment behavior and activity level within the system; and manual classification of access devices by personnel.
In this embodiment, an empty rule means propagation to all devices in the system. The system limits how many terminal devices may select such propagation forms and how many effects each may propagate, controlling the volume of augmented reality effects issued without rules and preventing terminals from being overloaded by too many uniformly recognized effects.
In this embodiment, besides directional, rule-based, and in-system issuing, an auxiliary issuing module can be added to the system. It comprises an auxiliary recognition tool and an auxiliary propagation control system. The auxiliary recognition tool is a patterned flat material, such as a sticker: before generating the identification mark, the terminal device places the tool on the object to be recognized, so the generated mark carries the tool's pattern. Before a terminal device recognizes a scene, the auxiliary propagation control system checks, via image recognition, whether the scene contains the corresponding auxiliary pattern; if so, the normal recognition flow proceeds; if not, no recognition is performed. An operator producing augmented reality effects by this method can thus control their propagation through the auxiliary recognition tool.
In this embodiment, the propagating device may also place specific limits on propagation, controlling the receiving device's acquisition rights to the augmented reality effect. If the effect is permanently visible after recognition, then once the receiving device has met a rule or been targeted directionally, it can still recognize the effect the next time it views the scene even if the rule is no longer met or the directional target has changed. If the effect is visible only at recognition time, the receiving device can recognize it only while the rule is met or while it remains the directional target. One implementation adds type-specific control fields to the data-number form, controlling whether a data number stays on the form temporarily or long-term.
In this embodiment, the identification display module 6 recognizes a picture in the scene and converts it into computer-language feature values; these are compared, by partition detection or multilayer neural-network machine learning, with the feature values of all identification marks associated with the terminal device. Reaching a certain similarity with some mark's feature values means recognition succeeds; otherwise it fails. After successful recognition, the playing area of the effect video is adjusted according to the recognized object's target size and the current attitude data, achieving the augmented reality effect.
Example 2
Referring to fig. 2, in order to achieve the second object of the present invention, the following technical solutions may be adopted:
a method for rapidly making and spreading augmented reality comprises the following steps:
step 1) image information acquisition S1 is carried out;
step 2) identifying and marking the acquired image information S2;
step 3) preprocessing before matching the image information of the identification mark S3;
step 4) matching, processing, and storing the image information S4;
step 5) transmitting the image information S5;
step 6) identifies and displays the image information S6.
In this embodiment, the step 1) includes performing video capture on the image information through the video capture module 11.
And the video acquisition comprises recording image information through the mobile terminal, and preliminarily compressing to obtain target image video information.
In this embodiment, the step 2) comprises making the acquired image information into one surface of the identification-mark picture and determining a closed continuous edge contour of that surface, recorded as the initial contour.
In this embodiment, this comprises analyzing and processing the acquired picture information and determining a closed continuous edge contour of the picture, recorded as the initial contour.
In this embodiment, an intelligent terminal with a camera photographs a surface of the object to be made into an identification mark; a closed continuous edge contour of that surface is then determined through a Gaussian filtering algorithm, grayscale processing, an edge detection algorithm, a dilation algorithm, and similar methods, and is recorded as the initial contour.
The initial contour is then subjected to contour sharpening and polygon fitting according to its contour features, i.e., a vector transformation or maximum-overlap decision transformation of the contour's approximate shape, converting it into the initial shape of a standard regular figure (for example, a book cover becomes a quadrilateral), recorded as the standard contour.
In this embodiment, when a scanned image is captured, several closed continuous edge contours may exist in the same picture. The invention screens them by position and size, and by default selects a centered, relatively large contour as the initial contour.
In this embodiment, the step 2) further comprises cutting out the image within the standard contour of the acquired image information, recorded as the original image.
Secondary vector processing is performed on the standard contour features by affine transformation, so that the shape features of the image are closer to the physical contour features of the object; this shape is taken as the identification contour. The pixels of the original image are then affine-transformed into the identification contour region, and the generated image is taken as the identification mark.
In this embodiment, generating the identification mark comprises sharpening the irregular initial contour into an irregular polygon, then approximately fitting the polygon into a standard quadrilateral contour, and finally vector-converting the quadrilateral into the closest rectangle, which serves as the identification contour.
In this embodiment, the whole or a part of the photographed surface of the object is a rectangular plane; a book cover, for example, is a rectangular plane. Because of the shooting angle, the cover can rarely be photographed exactly head-on, so the irregular initial contour is sharpened into an irregular polygon, the polygon is approximately fitted into a standard quadrilateral contour, and finally the quadrilateral is vector-converted into the closest rectangle, which serves as the identification contour.
The implementation of the identification mark making module is referred to as the scan-and-extract identification mark method.
In this embodiment, when the standard contour is converted into an identification contour and an identification mark is obtained, the mark is recorded as a quasi-identification mark. For picture information whose contour cannot be converted into a rectangle, an identification contour can still be derived and an identification mark obtained.
In this embodiment, the scan-and-extract identification mark method is not limited to extracting identification marks from objects with a rectangular plane: objects with a regular plane, an irregular plane, or an approximate plane may be processed similarly, producing a standard contour that is a continuous, closed, non-rectangular planar figure. Such a standard contour can be converted by a suitable algorithm into a more accurate identification contour to obtain an identification mark; marks of this type require secondary processing in a subsequent module and are recorded as precise identification marks.
A specific method for transforming a standard contour of the above type is as follows: the gyroscope of the intelligent terminal is used to read its polar coordinate matrix at scan time, the tilt angle of the device during scanning is computed, and the standard contour is projection-transformed (i.e., affine-transformed) at that angle to generate a precise identification contour. Alternatively, for terminal devices with multiple cameras, the tilt angle of the object in three-dimensional imaging is determined through a depth-of-field algorithm, and the identification contour is generated after projection transformation.
If the step 2) omits the subsequent processing of the original image and the standard contour is used directly as the identification contour, the subsequent matching and recognition rates will drop, but this still falls within the scope of the implementation method of the invention.
That subsequent processing may apply contrast global equalization and illumination attenuation to the identification-mark picture to reduce the influence of external factors such as lighting on picture quality; the denoised picture has more accurate feature values and higher similarity with the marked surface of the photographed object, improving the accuracy of the subsequent identification module.
The step 2) may also take the standard contour directly as the identification contour, or take the original picture, cropped at a fixed ratio, directly as the identification mark.
Furthermore, after the standard contour is generated, an operator may manually modify and transform it based on personal observation, correcting inaccuracies in the initial contour caused by algorithm error or mistaken contour screening, making the standard contour more accurate.
Further, the scan-and-extract identification mark method is not limited to scanning during shooting; it has the following simplified forms:
1. Select a viewfinder mode with a fixed crop box, and move the shooting angle so that the object surface overlaps the crop box as closely as possible. After shooting, the crop box is fixed as the identification contour, and the figure inside it is used as the identification mark. The fixed crop box may have various shapes and sizes.
2. Photograph the object directly, then zoom, crop, and adjust the generated picture; the contour of the adjusted picture is used as the identification contour, and the adjusted picture as the identification mark.
Both simplified forms noticeably increase manual operation and reduce the success rate of subsequent matching identification and the edge-matching accuracy (the fit of the augmented reality effect is lower), but they do not depart from the protection scope of the invention.
In this embodiment, the step 3) comprises scaling and rotating the rectangular identification contour of the image information, placing it entirely within the target video frame, cutting the target video image according to the contour shape, and recording the cut video as the effect video information. The effect video information is the video played when the identification mark is displayed; at this point it is rectangular.
The cropping overlaps the identification contour with the center point of the target video, then rotates and vector-scales it so that it lies entirely inside the target video boundary with its area maximized. The size and relative position of the identification contour at this point are used as the cropping parameters. A target video cut with these parameters reproduces well (the captured range is maximized and the effect video has the same orientation as the target video).
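The cropping-parameter computation above (center the identification contour on the target video, then scale it as large as possible while it stays entirely inside the boundary) can be sketched in NumPy. Rotation is omitted for brevity, and the function name `crop_params` is an invention of this sketch.

```python
import numpy as np

def crop_params(contour, frame_w, frame_h):
    """Center the contour on the frame and maximize its scale inside it."""
    contour = np.asarray(contour, dtype=float)
    cx, cy = contour.mean(axis=0)
    centered = contour - [cx, cy]             # move contour center to origin
    half_w = np.abs(centered[:, 0]).max()
    half_h = np.abs(centered[:, 1]).max()
    # Largest scale keeping the contour inside the frame (area maximized).
    scale = min((frame_w / 2) / half_w, (frame_h / 2) / half_h)
    placed = centered * scale + [frame_w / 2, frame_h / 2]
    return scale, placed

# A 40x30 identification contour cropped against a 1920x1080 target video.
scale, placed = crop_params([[0, 0], [40, 0], [40, 30], [0, 30]], 1920, 1080)
```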
The step 3) also comprises, for a non-rectangular identification contour of the image information, making a rectangular vector mask over the contour so that the mask completely covers it; the mask contour is recorded as the extended contour.
In this embodiment, the target video is cut according to the extended contour, generating a quasi-effect video. The quasi-effect video is generated as for a rectangular contour: the extended contour is scaled and rotated, placed entirely within the target video frame, and the target video image is cut along its shape, with the cut video recorded as the effect video information.
In this embodiment, the region outside the identification contour and inside the extended contour is called the extended area (Extended_area). The extended area undergoes alpha-channel processing (made transparent) and is attached outside the quasi-identification mark; the resulting new pattern is used as the identification mark.
In this embodiment, for a non-rectangular identification contour, the four outermost points of the contour in the plane coordinate system are taken, and these points determine a rectangular mask with no rotation relative to the plane coordinates. The extended area generated this way is relatively small, and the augmented reality effect has no rotational deviation.
If this handling of the identification contour is omitted and the target video is used directly as the final video, the augmented reality effect can still be played smoothly through the subsequent modules, but the video will show obvious squeezing, deformation, or rotation, and the result is poor. (Most existing augmented reality systems do not process the video at all.)
For a non-rectangular identification contour, the alpha channel value of the quasi-effect video's extended region is set to 0, producing a new result set. This result set is blended with the video through mask techniques to generate the effect video.
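The alpha-channel step above can be sketched directly on an RGBA array: pixels of the extended region (inside the rectangular extended contour but outside the identification contour) get alpha 0, so only the identified shape survives blending. The 8x8 size and the square stand-in contour are toy assumptions.

```python
import numpy as np

h, w = 8, 8
rgba = np.full((h, w, 4), 255, dtype=np.uint8)  # extended-contour crop, fully opaque

# Hypothetical non-rectangular identification contour, here a centered 4x4 region.
inside = np.zeros((h, w), dtype=bool)
inside[2:6, 2:6] = True

rgba[~inside, 3] = 0  # extended region: alpha channel set to 0 (transparent)
```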
Furthermore, the target video may be cut directly along the identification contour; the final augmented reality effect differs little from the above, but this approach consumes far more CPU computing power on the terminal device. It also falls within the protection scope of this patent.
In this embodiment, through the analysis and processing of step 3), the terminal device can, in the subsequent identification and display module, recognize a non-rectangular identification mark and play the fitted non-rectangular video on the corresponding marker.
In this embodiment, the step 4) comprises extracting feature value information from the image information and matching it with the effect video; the image information, the feature values of the image identification mark, the matching information, and the video data are then associated, recorded as the associated data of the augmented reality effect, and stored on the cloud server.
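One hypothetical layout for such a set of associated data, assuming a Python backend; the field names and the in-memory `cloud_store` stand in for the cloud server and are not specified by the patent:

```python
from dataclasses import dataclass

@dataclass
class AssociatedData:
    """One set of associated data for an augmented reality effect."""
    associated_data_id: str
    marker_image: bytes            # identification-mark picture
    feature_values: list           # computed feature vector of the marker
    matching_info: dict            # e.g. crop parameters, contour type
    effect_video_url: str          # reference to the stored effect video

cloud_store = {}  # associated_data_id -> AssociatedData (stands in for the cloud)

record = AssociatedData(
    associated_data_id="AR-0001",
    marker_image=b"\x00",                     # placeholder bytes
    feature_values=[0.12, 0.85, 0.33],        # illustrative values
    matching_info={"contour": "rectangular"},
    effect_video_url="https://example.invalid/videos/AR-0001.mp4",
)
cloud_store[record.associated_data_id] = record
```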
In this embodiment, the step 5) differs from previous augmented reality information transmission. Existing augmented reality effects can only be realized locally, or made visible to all devices through a cloud. The transmission modes provided by the invention include multiple directed forms of propagation, so that the making device, or an authorized device, can select which devices may see an augmented reality effect according to certain rules, facilitating propagation while respecting privacy.
Meanwhile, for non-target devices the system sends no related identification information or video data, which greatly relieves the recognition pressure brought by growth in the augmented reality system's content volume and frees the devices' memory and storage.
Further, the invention provides a way of binding augmented reality effects to terminal devices. The cloud server records data number information (Associated_data_ID) for each set of associated data in the system, and each terminal device generates unique device number information (Device_ID) when it uses the system. For each Device_ID, the cloud server keeps a form of all Associated_data_IDs related to that device. When a device is authorized for an augmented reality effect (visible when the device is turned on), the effect's data number is added to the form corresponding to that device number. When the device checks whether an identification mark exists in a scene, it performs feature value matching only against the identification marks issued according to its corresponding form.
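The binding scheme above can be sketched as a per-device form of data numbers. `Device_ID` and `Associated_data_ID` follow the terms in the text; the dictionary structure and function names are assumptions of this sketch.

```python
forms = {}  # device_id -> set of associated_data_ids visible to that device

def authorize(device_id, associated_data_id):
    """Make one augmented reality effect visible to one device."""
    forms.setdefault(device_id, set()).add(associated_data_id)

def markers_to_match(device_id):
    """Only identification marks in the device's form are issued for matching."""
    return forms.get(device_id, set())

authorize("Device-7f3a", "AR-0001")
authorize("Device-7f3a", "AR-0002")
authorize("Device-9c21", "AR-0001")
```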
Further, in this embodiment, during information transmission the making device or an authorized device holds authorization rights and may selectively grant a propagated device the right to propagate an augmented reality effect again, completing authorization. The authorization form includes, but is not limited to, the authorized propagation type, the authorized propagation period, and the authorized number of propagations, enabling chain-type or multi-level propagation of an augmented reality effect.
In this embodiment, the realization of the augmented reality effect is not limited to binding with the terminal device: an independent account can be applied for in the system through the intelligent terminal device, with account information (User_ID) replacing the device number information and corresponding to the data number information. On this basis, issuing of the associated data can be realized by issuing against the account information, or against the account information combined with the device number information.
In this embodiment, the step 6) comprises, after the terminal device is turned on, recognizing a picture in the scene and converting it into computer-readable feature values, then comparing these against the feature values of all identification marks related to the terminal device through partition detection or multi-layer neural network machine learning. If a certain similarity with the feature values of some identification mark is reached, identification succeeds; otherwise it fails. After successful identification, the playing area of the effect video is adjusted according to the target size of the identified object and the current attitude data, achieving the augmented reality effect.
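The matching step above can be sketched as a nearest-feature search with a similarity threshold. Cosine similarity, the 0.9 threshold, and the three-dimensional feature vectors are illustrative assumptions; the patent leaves the feature extractor (partition detection or a multi-layer neural network) unspecified.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(scene_features, marker_db, threshold=0.9):
    """Return the best-matching marker id, or None if recognition fails."""
    best_id, best_sim = None, threshold
    for marker_id, feats in marker_db.items():
        sim = cosine(scene_features, feats)
        if sim >= best_sim:
            best_id, best_sim = marker_id, sim
    return best_id

# Feature values of the identification marks in this device's form.
db = {"AR-0001": [1.0, 0.0, 0.2], "AR-0002": [0.0, 1.0, 0.0]}
hit = recognize([0.98, 0.02, 0.21], db)            # close to AR-0001
miss = recognize([0.5, 0.5, 0.5], db, threshold=0.99)  # nothing similar enough
```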
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (24)

1. An augmented reality system capable of being rapidly manufactured and propagated is characterized in that: the system comprises an image acquisition module, an identification marking module, a matching preprocessing module, a matching processing analysis module, a directional issuing module and an identification display module; the image acquisition module is used for acquiring image information; the identification mark module is used for processing the image information and generating an identification mark; the matching preprocessing module is used for analyzing and processing the contour information of the image information; the matching processing analysis module is used for further analyzing and processing the image information processed by the matching preprocessing module; the directional issuing module is used for directionally transmitting the image information; the identification display module is used for identifying, converting and displaying the image information;
the image acquisition module transmits image information to the matching preprocessing module through the identification marking module, and the matching preprocessing module transmits preprocessed image information to the matching processing analysis module; and the matching processing and analyzing module transmits the processed information to the identification and display module through the directional issuing module.
2. The system for rapid production and dissemination of augmented reality according to claim 1, wherein: the image acquisition module comprises a video acquisition module.
3. The system for rapid production and dissemination of augmented reality according to claim 1, wherein: the image acquisition module comprises a picture acquisition module.
4. The system for rapid production and dissemination of augmented reality according to claim 1, wherein: the identification marking module comprises a plane extraction contour identification module or a fitting processing identification module or an affine transformation module.
5. The system for rapid production and dissemination of augmented reality according to claim 1, wherein: the matching preprocessing module comprises an image classification module.
6. The system for rapid production and dissemination of augmented reality according to claim 1, wherein: the matching preprocessing module comprises an image cropping module.
7. The system for rapid production and dissemination of augmented reality according to claim 1, wherein: the directional issuing module comprises an identification mark issuing module or a geographic position issuing module or a point-to-point issuing module.
8. A method for rapidly making and transmitting augmented reality is characterized in that: the method comprises the following steps:
step 1) collecting image information;
step 2) identifying and marking the acquired image information;
step 3) preprocessing the image information of the identification mark before matching;
step 4), matching, processing and storing the image information;
step 5), transmitting image information;
and 6) identifying and displaying the image information.
9. The method for rapid production and propagation of augmented reality of claim 8, wherein: the step 1) comprises the step of collecting video information through a video collecting module.
10. The method for rapid production and propagation of augmented reality of claim 9, wherein: and the video information acquisition comprises recording image information through the mobile terminal, and preliminarily compressing to obtain target image video information.
11. The method for rapid production and propagation of augmented reality of claim 8, wherein: and the step 2) comprises the steps of making the collected image information into one surface of the picture of the identification mark, and determining the closed continuous edge contour of one surface of the picture, and recording the closed continuous edge contour as an initial contour.
12. The method for rapid production and propagation of augmented reality of claim 11, wherein: and the initial contour comprises the steps of carrying out contour sharpening and polygon fitting treatment according to the contour characteristics, namely carrying out vector transformation or maximum superposition judgment transformation on the contour approximate shape to change the shape of the contour into the initial shape of a standard rule graph, and recording the initial shape as the standard contour.
13. The method for rapid production and propagation of augmented reality of claim 11, wherein: the initial contour also comprises the situation that the image information has a plurality of closed continuous edge contours in the same picture, screening is carried out according to the position and the size of the contour, and a centered and relatively large contour is selected as the initial contour by default.
14. The method for rapid production and propagation of augmented reality of claim 8, wherein: and the step 2) comprises cutting out the image in the acquired image information standard outline and recording the image as an original image.
15. The method for rapid production and propagation of augmented reality of claim 14, wherein: performing secondary vector processing on the standard outline characteristics in an affine transformation mode to enable the shape characteristics of the image to be closer to the physical outline characteristics of the object, and taking the shape as an identification outline; and performing affine transformation on pixel points in the original image to the identification contour region, and taking the generated image as an identification mark.
16. The method for rapid production and propagation of augmented reality of claim 15, wherein: generating the identification mark comprises sharpening the irregular initial contour into an irregular polygon, then approximately fitting the polygon into a standard quadrilateral contour, and finally vector-converting the quadrilateral into the closest rectangle as the identification contour.
17. The method for rapid production and propagation of augmented reality of claim 14, wherein: the standard outline can be converted into an identification outline and an identification mark is obtained, and the standard outline is marked as a quasi-identification mark.
18. The method for rapid production and propagation of augmented reality of claim 17, wherein: the standard contour is converted into an identification contour by calling, via the gyroscope of the intelligent terminal, its polar coordinate matrix at scan time, analyzing the tilt angle of the device during scanning, and projection-transforming the standard contour at that angle to generate a precise identification contour; or, for terminal devices with multiple cameras, determining the tilt angle of the object in three-dimensional imaging through a depth-of-field algorithm and generating the identification contour after projection transformation.
19. The method for rapid production and propagation of augmented reality of claim 14, wherein: when the standard contour is used directly as the identification contour, contrast global equalization and illumination attenuation may be applied to the identification-mark picture, reducing the influence of light on picture quality.
20. The method for rapid production and propagation of augmented reality of claim 8, wherein: and 3) zooming and rotating the rectangular recognition outline of the image information, completely placing the rectangular recognition outline in a target video picture range, cutting the target video image according to the shape of the outline, and recording the cut video as effect video information.
21. The method for rapid production and propagation of augmented reality of claim 20, wherein: the cropping is to overlap the identified contour with the center point of the target video for video information, rotate and vector scale the identified contour to ensure that the identified contour is completely inside the target video boundary and the area is maximized.
22. The method for rapid production and propagation of augmented reality of claim 8, wherein: the step 3) comprises the steps of identifying the non-rectangular outline of the image information, and making a rectangular vector mask on the non-rectangular outline to enable the mask to completely cover the outline; the mask profile is marked as an extended profile.
23. The method for rapid production and propagation of augmented reality of claim 8, wherein: the step 4) comprises extracting characteristic value information of the image information and matching the characteristic value information with the effect video; and correlating the image information, the characteristic value information of the image identification mark, the matching information and the video data, recording as the correlated data of the augmented reality effect, and storing the correlated data in the cloud server.
24. The method for rapid production and propagation of augmented reality of claim 8, wherein: the step 6) comprises identifying the picture in the scene and converting the picture into a characteristic value of a computer language, comparing the characteristic value with all the characteristic values of the identification marks related to the terminal equipment in a partition detection or multilayer neural network machine learning mode, and achieving certain similarity with the characteristic value of a certain identification mark, namely, the identification is successful, otherwise, the identification is failed; after the object is successfully identified, the playing area of the effect video is adjusted according to the target size of the identified object and the current attitude data, and the augmented reality effect is achieved.
CN201911136300.1A 2019-11-19 2019-11-19 Augmented reality system and method capable of being rapidly manufactured and propagated Pending CN111222537A (en)


Publications (1)

Publication Number Publication Date
CN111222537A true CN111222537A (en) 2020-06-02

Family

ID=70829432


Country Status (1)

Country Link
CN (1) CN111222537A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113358825A (en) * 2021-06-02 2021-09-07 重庆大学 Indoor air quality detector with assimilation algorithm
TWI775232B (en) * 2020-12-07 2022-08-21 中華電信股份有限公司 System and method for making audio visual teaching materials based on augmented reality
TWI830628B (en) * 2023-03-21 2024-01-21 華碩電腦股份有限公司 Image generation method an image generation device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130063487A1 (en) * 2011-09-12 2013-03-14 MyChic Systems Ltd. Method and system of using augmented reality for applications
CN105046213A (en) * 2015-06-30 2015-11-11 成都微力互动科技有限公司 Method for augmenting reality
US20190056791A1 (en) * 2014-06-26 2019-02-21 Leap Motion, Inc. Integrated gestural interaction and multi-user collaboration in immersive virtual reality environments
CN110335292A (en) * 2019-07-09 2019-10-15 北京猫眼视觉科技有限公司 It is a kind of to track the method and system for realizing simulated scenario tracking based on picture


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈明 (CHEN Ming): 《互联网应用》 (Internet Applications), 中央广播电视大学出版社 (Central Radio and TV University Press), pages 101-102 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination