CN108875460A - Augmented reality processing method and processing device, display terminal and computer storage medium - Google Patents
- Publication number
- CN108875460A CN108875460A CN201710340898.0A CN201710340898A CN108875460A CN 108875460 A CN108875460 A CN 108875460A CN 201710340898 A CN201710340898 A CN 201710340898A CN 108875460 A CN108875460 A CN 108875460A
- Authority
- CN
- China
- Prior art keywords
- target object
- information
- frame
- video
- display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
- G06V10/95—Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/48—Matching video sequences
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
Embodiments of the invention disclose an AR processing method and device, a display terminal, and a computer storage medium. The AR processing method includes: obtaining AR information of a target object in the video; tracking the display position of the target object in the currently displayed image frame of the video; and superimposing, according to the display position, the AR information onto the current image frame.
Description
Technical field
The present invention relates to the field of information technology, and in particular to an augmented reality (AR) processing method and device, a display terminal, and a computer storage medium.
Background technique
AR is a display technology that expands displayed content by superimposing various information on images acquired from the real world. In the prior art, the information introduced by AR needs to be displayed around the corresponding graphic object, but the AR information often deviates from that object. There may also be a need to keep the current video static until the AR information has been obtained before it can be superimposed. Both the deviation of the AR information and this pausing of the current video clearly degrade the user experience.
Summary of the invention
In view of this, embodiments of the present invention are intended to provide an AR processing method and device, a display terminal, and a computer storage medium, so as to reduce the problem of AR information deviating from the corresponding graphic object and the need to pause the video.
In order to achieve the above objectives, the technical solution of the invention is realized as follows:
A first aspect of the embodiments of the present invention provides an augmented reality (AR) processing method, applied to a display terminal, including:
Displaying video based on a video stream;
Obtaining AR information of a target object in the video;
Tracking the display position of the target object in the currently displayed image frame of the video;
Superimposing, according to the display position, the AR information onto the current image frame.
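The four steps above can be sketched as a minimal per-frame loop. This is an illustrative sketch only, not the patent's implementation; the frame representation, `get_ar_info`, and `track_position` are hypothetical stand-ins.

```python
# Minimal sketch of the claimed AR pipeline (steps S110-S140).
# Frame data, the AR lookup, and the tracker are hypothetical stand-ins.

def overlay_ar(frames, get_ar_info, track_position):
    """For each displayed frame, re-locate the target and pin the AR info to it."""
    rendered = []
    ar_info = None
    for frame in frames:
        if ar_info is None:
            ar_info = get_ar_info(frame)   # step S120: obtain AR info once
        pos = track_position(frame)        # step S130: track display position
        rendered.append({"frame": frame, "ar": ar_info, "at": pos})  # step S140
    return rendered

# Toy usage: the target drifts right by 5 px per frame; the label follows it.
frames = [{"id": i, "target_x": 100 + 5 * i} for i in range(3)]
out = overlay_ar(frames,
                 get_ar_info=lambda f: "bus",
                 track_position=lambda f: (f["target_x"], 50))
```

The key point of the claim is visible in the loop: the AR information is fetched once, but its display position is recomputed for every frame, so the overlay follows the moving object instead of sticking to a stale position.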
A second aspect of the embodiments of the present invention provides an AR processing device, applied to a display terminal, including:
A display unit, configured to display video based on a video stream;
An acquiring unit, configured to obtain AR information of a target object in the video;
A tracking unit, configured to track the display position of the target object in the currently displayed image frame of the video;
The display unit is further configured to superimpose, according to the display position, the AR information onto the current image frame.
A third aspect of the embodiments of the present invention provides a display terminal, including:
A display, configured to display information;
A memory, configured to store a computer program;
A processor, connected to the display and the memory, configured to control the display terminal, by executing the computer program, to perform any one of the above AR processing methods.
A fourth aspect of the embodiments of the present invention provides a computer storage medium storing a computer program; when executed by a processor, the computer program implements any one of the above AR processing methods.
The embodiments of the present invention provide an AR processing method and device, a display terminal, and a computer storage medium. While video is being displayed, the display is not paused; instead, the display position of the target object in each image frame is tracked during display. After the AR information is obtained, it can be superimposed onto the current image frame according to the tracked display position, so that the AR information lands on or near the target object. This reduces the phenomenon that the movement of the target object between frames of the video causes the AR information to deviate from it, solving the problem of AR information deviating from the corresponding target object without pausing the display of the video to wait for the AR information, thereby improving the user experience.
Detailed description of the invention
Fig. 1 is a flow diagram of a first AR processing method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of a reference point, feature points, offset vectors and a mean-shift vector provided by an embodiment of the present invention;
Fig. 3 is a flow diagram of a second AR processing method provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of a video display effect provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of another video display effect provided by an embodiment of the present invention;
Fig. 6 is a structural diagram of an AR processing device provided by an embodiment of the present invention;
Fig. 7 is a structural diagram of another AR processing device provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of an AR processing system and its processing flow provided by an embodiment of the present invention.
Specific embodiment
The technical solution of the present invention is further described in detail below with reference to the drawings and specific embodiments of the specification.
As shown in Fig. 1, this embodiment provides an AR processing method applied to a display terminal, including:
Step S110: displaying video based on a video stream;
Step S120: obtaining AR information of a target object in the video;
Step S130: tracking the display position of the target object in the currently displayed image frame of the video;
Step S140: superimposing, according to the display position, the AR information onto the current image frame.
This embodiment provides an AR processing method applied to a display terminal. The display terminal here can be any of various terminals with a display screen, for example a mobile phone, a tablet computer, a wearable device or another portable display terminal, or any of various vehicle-mounted devices with a display screen. The display screen can be a liquid crystal display, an electronic-ink display, a projection display, or any of various other display screens.
In step S110, the display terminal can display video based on a video stream. The video stream includes a data stream of multiple image frames with display timing. The video stream may be currently captured by the display terminal, received from another device, or stored in advance in the display terminal; in each case it is a data stream the terminal can use to display video.
In step S120, the AR information of the target object in the video can be obtained. The AR information here may include one or more of: identification information of the target object, classification information, various attribute information, and location information of the position of the target object. Target object 1 and target object 2 are labeled in Fig. 4; target object 1 is obviously a person, and target object 2 is a vehicle. Fig. 5 shows an image with AR information superimposed; the AR information shown in Fig. 5 includes the words "bus" and "star A". The superimposed AR information is displayed near the corresponding target object. For example, the word "bus" is displayed near the target object recognized as a bus in the current image, reducing the phenomenon that the AR information deviates far from the target object.
The identification information can be information such as the name or identification serial number of the object corresponding to the target object. For example, if an image contains the graphic object of a vehicle, the identification information included in the AR information may be text indicating that the vehicle is a Benz; the text "Benz" is then one kind of the identification information.
The classification information can be information indicating the class to which the target object belongs, for example text indicating that the vehicle is a bus; the text "bus" can then serve as one kind of the classification information.
The attribute information can characterize attributes of the target object. For example, if the target object is recognized as a bus and its route number is recognized, the attribute information may include the route information of that bus. The bus route number can be another example of the identification information.
The location information may indicate the approximate current location of the target object. For example, if the target object is a vehicle, image recognition and/or positioning, such as Global Positioning System (GPS) positioning, can provide the current location information of the vehicle, for example that it is currently located at HaiDian South Road, Haidian District, Beijing.
In this embodiment the AR information can be text information or image information; in short, the AR information here can be any information superimposed on the current image.
In this embodiment, the image to be recognized can be recognized locally by the display terminal based on the correspondence between images and predetermined information stored in a local image-recognition library, and part of the predetermined information is extracted and displayed as the AR information.
The display position of the target object may differ between image frames of the video. In this embodiment, the position of the target object in each image frame of the video can be tracked, thereby determining its display position in the currently displayed image frame.
In step S140, the AR information can conveniently be superimposed on or around the target object according to the determined display position, avoiding the information-offset problem caused by superimposing the AR information at the wrong position, and thus improving the user experience.
In this embodiment, in step S140 the AR information is superimposed according to the display position of the target object in the current image frame, rather than at an arbitrary position. The current image frame here is the image frame displayed at the current moment. The display position may include indication information such as the coordinates of the target object in the current image frame.
In step S140, the display position of each target object in each image frame of the video stream can be tracked, so that the AR information can be superimposed on or around the target object, avoiding the information-offset problem caused by superimposing it at the wrong position and thus improving the user experience.
The AR information can be superimposed onto multiple image frames of the video stream until the target object disappears from the frames of the video stream. However, each superimposition requires re-determining the position parameter of the target object in the current image frame, so that the AR information follows the target object as its position switches between frames. This avoids the AR information detaching from the target object as it moves between frames, again improving the user experience.
In some embodiments, step S120 may include:
Intercepting one or more frames that meet a preset clarity condition as the image(s) to be recognized;
Obtaining the AR information corresponding to the recognition result of the target object in the one or more images to be recognized.
In this embodiment, judging whether an image frame meets the preset clarity condition may include: extracting the contour information of the corresponding image frame; if the contour information is extracted successfully, the image frame is considered to meet the preset clarity condition.
In some embodiments it may also include: calculating the gray difference between each pixel and its surrounding pixels; if a preset number of pixels have a gray difference from their surrounding pixels greater than a preset threshold, the image frame can be considered to meet the preset clarity condition.
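The gray-difference clarity test just described can be sketched as follows. This is a hedged illustration of the idea, not the patent's implementation; the 4-connected neighbourhood, the thresholds, and the function name are illustrative assumptions.

```python
# Sketch of the "preset clarity condition": count pixels whose gray value
# differs from some neighbour by more than a threshold, and require a
# minimum count. Thresholds are illustrative, not from the patent.

def is_sharp(gray, diff_threshold=30, min_count=4):
    """gray: 2-D list of gray values. True if enough high-contrast pixels exist."""
    h, w = len(gray), len(gray[0])
    count = 0
    for y in range(h):
        for x in range(w):
            # compare with the 4-connected neighbours
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w
                        and abs(gray[y][x] - gray[ny][nx]) > diff_threshold):
                    count += 1
                    break
    return count >= min_count

sharp = [[0, 255], [255, 0]]        # strong edges everywhere
blurry = [[100, 105], [103, 101]]   # only small local differences
```

On a blurred frame the local gray differences shrink, so the count stays below the threshold and the frame is rejected as an image to be recognized.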
In some cases the video stream is received from another device or stored in advance in the display terminal, and its image frames are divided into key frames and non-key frames; in this embodiment one or more key frames can be chosen as the images to be recognized that meet the preset clarity condition.
In short, there are many ways to determine which frames of the video stream meet the preset clarity condition, and they are not limited to any one of the above.
Some embodiments further include:
Sending the image to be recognized to a service platform on the network side for the service platform to recognize.
In this embodiment, when image recognition is performed, any graphic object in the image to be recognized can be treated as the target object to be recognized, or only some of the graphic objects can be treated as the target object.
It is worth noting that in some embodiments there is no fixed order between step S120 and step S130; step S120 can be executed after step S130, or synchronously with step S130.
Optionally, as shown in Fig. 3, step S120 may include:
Step S122: sending the image to be recognized to a service platform, where the image to be recognized is used by the service platform to perform image recognition to obtain a recognition result;
Step S123: receiving the AR information returned by the service platform based on the recognition result.
In this embodiment the service platform can be a server side formed of one or more servers that provides image recognition.
In this embodiment the display terminal sends the image to be recognized to the service platform; that is, the client can intercept one or more frames from the video stream and send them to the service platform, which performs the image recognition. In this embodiment the recognition by the service platform can be generalized recognition, meaning the service platform can recognize any recognizable graphic object in the image to be recognized, realizing comprehensive recognition of every graphic object in the image and providing as much AR information as possible.
Performing image recognition on the service platform reduces the load on the display terminal itself, reducing its resource consumption and power consumption. If the display terminal is a mobile display terminal, this can extend its standby time.
In some embodiments, step S120 may include:
Extracting image features of at least part of the images in the video stream;
Determining, according to the image features, whether the image to be recognized meets the preset clarity condition.
In this embodiment the image features may include the contour features, texture features and/or gray features of each graphic object in the image to be recognized. All of these can be features used for image recognition.
The contour features may include the outer contour of an object's image and the inner contours within it; these contours describe information such as the shape and size of the graphic object, making it convenient to obtain the AR information by matching against a reference image during recognition.
The texture features can describe the gray-change gradient between adjacent contours; they can likewise be used for image recognition and can reflect information such as the material of the target object.
The gray features can directly include gray values and gray gradients; the gray values and gradients can in turn be used to extract the contour features and texture features.
In short, there are many kinds of image features, not limited to any of the above; for example, when the image to be recognized is a color image, the image features may also include features indicating the overall or local color of each target object.
In some embodiments, extracting the image features of at least part of the images in the video stream includes:
Extracting feature points of at least part of the images in the video stream, where a feature point is a first pixel with a first gray value, and the difference between the first gray value and the second gray value of a second pixel near the first pixel meets a preset difference condition;
Judging, according to the number of feature points, whether the image meets the preset clarity condition as the image to be recognized.
For example, the first gray value of the first pixel is A1 and the second gray value of the second pixel is B1; meeting the preset difference condition may include the absolute value of the difference between A1 and B1 being not less than a difference threshold. If the difference between the first and second gray values is sufficiently large, the pixel may be a pixel on the contour of an image object or a highlight point of a bright part, and can serve as an important pixel for recognizing the corresponding image object.
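The A1/B1 condition can be written out directly. A hedged one-line sketch; the function name and default threshold are illustrative, not from the patent.

```python
# The preset difference condition: |A1 - B1| not less than a difference
# threshold. The default threshold of 30 is an illustrative assumption.

def meets_difference_condition(a1, b1, threshold=30):
    """a1, b1: gray values of the first and second pixel."""
    return abs(a1 - b1) >= threshold

# A contour-like jump passes; a small fluctuation does not.
on_edge = meets_difference_condition(200, 150)   # |50| >= 30
flat = meets_difference_condition(100, 110)      # |10| <  30
```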
In some embodiments, the second pixel can be a pixel in the neighborhood of the first pixel.
In some embodiments, the neighborhood is the region formed by extending N pixels in a first direction and a second direction from the first pixel as center; the pixels in that region form the neighborhood. N can be a positive integer, and the first direction can be perpendicular to the second direction.
In some embodiments the neighborhood can be a rectangular region; it can also be a circular region centered on the first pixel, in which case the second pixel is a pixel located in that circular region.
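The two neighborhood shapes just described can be enumerated as below. This is an illustrative sketch; the coordinate convention and function names are assumptions, not from the patent.

```python
# Sketch of the two neighbourhood shapes: a square region of radius N around
# the first pixel, and a circular region of the same radius.

def square_neighbourhood(cx, cy, n):
    """All pixels within N steps of (cx, cy) in both directions, excluding the center."""
    return [(x, y) for x in range(cx - n, cx + n + 1)
                   for y in range(cy - n, cy + n + 1)
                   if (x, y) != (cx, cy)]

def circular_neighbourhood(cx, cy, radius):
    """All pixels within Euclidean distance `radius` of (cx, cy), excluding the center."""
    return [(x, y) for x in range(cx - radius, cx + radius + 1)
                   for y in range(cy - radius, cy + radius + 1)
                   if (x, y) != (cx, cy)
                   and (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2]

sq = square_neighbourhood(0, 0, 1)    # 8 neighbours for N = 1
ci = circular_neighbourhood(0, 0, 1)  # 4 neighbours (diagonals excluded)
```

For the same radius the circular neighborhood is a subset of the square one, which is why the choice of shape changes which pixels count as the "second pixel".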
If an image frame is noticeably blurred, the gray differences between pixels in the blurred region will be small, and few feature points will appear.
In this embodiment, whether an image frame meets the preset clarity condition can be determined based on the number of feature points. For example, when the number of feature points is greater than a quantity threshold, the corresponding image frame is considered to meet the preset clarity condition.
In other embodiments, the method also includes: based on the number and distribution of the feature points, calculating the distribution density of the feature points in each sub-region; when there are M sub-regions in an image whose distribution density is greater than a density threshold, the image frame can be considered to meet the preset clarity condition.
In some embodiments, step S140 may include:
Locating the first position parameter of the target object in the previous image frame of the video stream;
Searching, based on the first position parameter, for the second position parameter of the target object in the current image frame.
In this embodiment, to reduce the cost of locating the display position of each target object, tracking is used: since the movement of the target object between two adjacent frames of the video is gradual, the search for the second position parameter in the current image frame starts from the first position parameter in the previous frame, so that each positioning of the target object does not have to cover the entire picture of the current image frame, reducing the amount of calculation.
For example, a search range for the target object is determined based on the first position parameter, so the target object need not be searched over the entire current image frame, reducing the calculation in the search process. Specifically, the edge position corresponding to the first position parameter in the current image frame is extended outward by a preset number of pixels to form the search region; then, by matching the search region against the image of the target object in the previous image frame, the second position parameter of the target object in the current image frame is located. If the current search region does not contain the target object, the search region is expanded or changed until the entire current image frame has been searched. In this case, a slowly moving target object can clearly be located very quickly, yielding the second position parameter in the current image frame.
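The search-region construction just described can be sketched as follows. This is an illustrative sketch under the assumption that positions are axis-aligned boxes; the box format, margin, and function name are not from the patent.

```python
# Sketch of the restricted search: take the target's bounding box from the
# previous frame, extend each edge outward by a preset number of pixels,
# and clip to the frame. Search is then confined to this region first.

def search_region(prev_box, margin, frame_w, frame_h):
    """prev_box = (x, y, w, h) from the previous frame; returns the clipped search box."""
    x, y, w, h = prev_box
    nx, ny = max(0, x - margin), max(0, y - margin)
    nw = min(frame_w, x + w + margin) - nx
    nh = min(frame_h, y + h + margin) - ny
    return (nx, ny, nw, nh)

# A 40x30 target box extended by 20 px on each side, well inside a 640x480 frame.
region = search_region((100, 80, 40, 30), margin=20, frame_w=640, frame_h=480)
```

Matching only inside `region` is what keeps the per-frame calculation small; on a miss, the caller would enlarge the margin and retry, up to the full frame.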
Of course, the above is only one way of determining the second position parameter based on the first position parameter, and the method is not limited to it. For example, step S142 may include:
Determining, based on the tracking of the target object in the previous image frame, the reference point of the target object in the current frame, where the reference point is a pixel characterizing the display position of the target object in the previous image frame;
Determining the offset vector of each feature point of the current image frame relative to the reference point, where a feature point is a first pixel with a first gray value, and the difference between the first gray value and the second gray value of a second pixel near the first pixel meets a preset difference condition;
Determining, based on the offset vectors, the mean-shift vector of the feature points relative to the reference point, where the mean-shift vector includes a mean shift direction and a mean shift amount;
Locating, based on the reference point and the mean-shift vector, the target point corresponding to the target object, where the target point is the reference point of the next image frame and corresponds to the second position parameter.
In this embodiment the reference point can be the center of the target object in the previous image frame, but is not limited to the center.
In this embodiment, each feature point in the current image frame can first be extracted; a feature point here is likewise a pixel whose gray difference from surrounding pixels meets the preset condition. The offset vector of each feature point is constructed with the reference point as the vector's starting position. The offset of each offset vector is obtained, and the mean of these offsets gives the mean shift amount of the mean-shift vector; combining the directions of the offset vectors by vector calculation determines the mean shift direction of the mean-shift vector. Under normal conditions, the offset direction of the mean-shift vector points toward the position of high feature-point density. In this embodiment the position of the target point can then be the end of the mean-shift vector starting from the reference point. The target point can serve as a component of the second position parameter; when tracking the target object in the next image frame, the target point of the current image frame can serve as the reference point of the next image frame, so that the tracking iterates. This way of locating the display position of the target object in the current image frame features a small amount of calculation and easy implementation.
As shown in Fig. 2, each solid dot indicates a characteristic point and the hollow circle indicates the datum mark; the single-line arrows indicate offset vectors; the hollow arrow represents the mean-shift vector. Clearly, the mean-shift vector starts from the current datum mark and points toward the region of higher characteristic-point distribution density; the mean shift amount of the mean-shift vector equals the mean of the offsets of all the offset vectors.
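The single mean-shift step described above can be sketched as follows; the function name and the tuple representation of points are illustrative only, not taken from the patent:

```python
def mean_shift_step(datum, feature_points):
    """One mean-shift iteration: the new target point is the end point of
    the mean-shift vector drawn from the datum mark, i.e. the datum mark
    moved by the mean of all offset vectors to the characteristic points."""
    if not feature_points:
        return datum  # no characteristic points: keep the old datum mark
    n = len(feature_points)
    dx = sum(x - datum[0] for x, y in feature_points) / n
    dy = sum(y - datum[1] for x, y in feature_points) / n
    # the mean-shift vector (dx, dy) points toward higher point density
    return (datum[0] + dx, datum[1] + dy)
```

Starting from a datum mark at the origin with characteristic points clustered around (10, 10), one step moves the target point to the cluster center; feeding each frame's result back in as the next frame's datum mark gives the iterative tracking described above.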
In some embodiments, the target object is the graphic object shown in the focus area of the images to be recognized. Each image frame can be acquired by a camera based on a specific focal position, and each image frame has its corresponding focus area. The image object located in the focus area is usually the clearest graphic object and is also the image object the user is focusing on. In the present embodiment, to reduce the identification workload, the target object is a graphic object located at least partially within the focus area.
In some embodiments, the method also includes:
during acquisition of the AR information, displaying an acquisition prompt on the picture of the video, wherein the acquisition prompt is used to prompt that the AR information is currently being acquired.
In some cases, identification of the images to be recognized may take some time. To avoid the user believing that identification has not yet started or has failed, displaying the acquisition prompt informs the user that the acquisition of AR information is currently in progress. The acquisition prompt can be text information or image information, for example a translucent mask shown over the current image frame, further improving the user experience.
An application example is provided below in conjunction with any one of the above embodiments:
As shown in Fig. 3, this example provides an AR processing method that can be applied in various intelligent terminals such as mobile phones and smart glasses. When capturing video, the intelligent terminal needs to superimpose AR information to improve the display effect, and the method may specifically include:
Step S110: Based on a video stream, displaying the video;
Step S121: Extracting one or more frames of images to be recognized in the video stream that meet a preset clarity condition, wherein the images to be recognized include the target object;
Step S122: Sending the images to be recognized to a service platform;
Step S123: Receiving AR information returned by the service platform based on the recognition result of the images to be recognized;
Step S131: Tracking the display position of the target object in each image frame, to obtain the location parameter of the target object in the current image frame;
Step S141: Superimposing and displaying the AR information in the current image frame according to the location parameter.
In Fig. 4, the acquisition prompt is shown as multiple nested dashed circles.
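Steps S110-S141 above can be sketched as a loop over the video stream; all function arguments here are hypothetical placeholders standing in for the camera, the service platform, the tracker, and the renderer described in the patent:

```python
def ar_display_loop(frames, is_clear, recognize, track, overlay):
    """Sketch of steps S110-S141: request AR information once from a clear
    frame (S121-S123), then track the target (S131) and superimpose the
    AR information on every subsequent frame (S141)."""
    ar_info = None
    output = []
    for frame in frames:
        if ar_info is None and is_clear(frame):
            ar_info = recognize(frame)  # S122/S123: service-platform round trip
        if ar_info is None:
            output.append(frame)        # S110: plain video until info arrives
        else:
            pos = track(frame)          # S131: location parameter in this frame
            output.append(overlay(frame, ar_info, pos))  # S141
    return output
```

The key design point the patent stresses is visible here: the overlay position comes from per-frame tracking, not from the (stale) position in the frame that was uploaded for identification.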
As shown in Fig. 6, the present embodiment provides an augmented reality (AR) processing device applied in a display terminal, including:
a display unit 110, configured to display a video based on a video stream;
an acquiring unit 120, configured to obtain the AR information of a target object in the video;
a tracking unit 130, configured to track the display position of the target object in the currently displayed current image frame of the video;
the display unit 110 is further configured to superimpose the AR information onto the current image frame according to the display position.
The display terminal provided in this embodiment can be any of various terminals including a display screen, and the display screen here can be any of various types of display screens such as a liquid crystal display screen, an electronic ink display screen, or a projection display screen.
The display unit 110, the acquiring unit 120, and the tracking unit 130 may all correspond to a processor or processing circuit in the terminal. The processor can be a central processing unit (CPU), microprocessor (MCU), digital signal processor (DSP), application processor (AP), or programmable logic controller (PLC), etc. The processing circuit can be an application-specific integrated circuit (ASIC). The processor or processing circuit can realize the above operations by executing executable code.
In short, when the display terminal performs AR display, the device provided in this embodiment can track the display position of the target object in each frame of the video, thereby ensuring that the AR information is superimposed and displayed near the corresponding target object. This reduces the phenomenon of the AR information of target object A being superimposed around target object B, reduces the phenomenon of AR information deviating from the target object, and improves the user experience.
Optionally, the acquiring unit 120 is specifically configured to extract one or more frames of images to be recognized in the video stream that meet a preset clarity condition, and to obtain the AR information corresponding to the recognition result based on the target object in the one or more frames of images to be recognized.
The acquiring unit 120 is specifically configured to send the images to be recognized to a service platform, wherein the images to be recognized are used by the service platform for image recognition to obtain a recognition result, and to receive the AR information returned by the service platform based on the recognition result.
In the present embodiment, the AR information can be obtained by information search from the service platform. The service platform can provide as much information as possible to the client, reducing the problems of insufficient or sparse AR information caused by the limited information storage of the terminal itself.
Optionally, the acquiring unit 120 is specifically configured to extract image features of at least part of the images in the video stream, and to determine, according to the image features, whether the images to be recognized meet the preset clarity condition.
In the present embodiment, the acquiring unit 120 is mainly used to select, through the extraction of image features, one or more sufficiently clear frames to be sent to the service platform or identified locally, improving identification accuracy and the probability of successful identification.
Optionally, the acquiring unit 120 is specifically configured to extract characteristic points of at least part of the images in the video stream, wherein a characteristic point is a first pixel of a first gray value, and the difference between the first gray value and a second gray value of a second pixel adjacent to the first pixel meets a preset difference condition; and to determine, according to the number of the characteristic points, the images to be recognized that meet the preset clarity condition.
In the present embodiment, the images to be recognized that meet the preset clarity condition are determined through the extraction of characteristic points, for example using FAST characteristic point detection, where FAST is the abbreviation of Features from Accelerated Segment Test.
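A toy illustration of this idea follows; it is not the actual FAST segment test (which compares a candidate pixel against a circle of 16 surrounding pixels) but shows the clarity criterion described above: count pixels whose gray value differs from a neighbouring pixel by more than a preset difference, and accept the frame when the count reaches a minimum. The threshold values are invented for illustration.

```python
def is_clear_enough(gray, threshold=40, min_points=20):
    """Crude clarity check: count horizontally adjacent pixel pairs whose
    gray difference exceeds `threshold`, and require at least `min_points`
    such pairs for the frame to meet the preset clarity condition."""
    count = 0
    for row in gray:
        for x in range(len(row) - 1):
            if abs(row[x] - row[x + 1]) > threshold:
                count += 1
    return count >= min_points

# A flat (blurred/featureless) image has no such points;
# a high-contrast checkerboard has many.
flat = [[128] * 8 for _ in range(8)]
checker = [[0 if (x + y) % 2 else 255 for x in range(8)] for y in range(8)]
```

A frame rejected by this check would simply not be uploaded, which is the workload reduction the embodiment aims for.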
In some embodiments, the tracking unit 130 is specifically configured to locate the first position parameter of the target object in the previous image frame of the video stream, and to search, based on the first position parameter, for the second position parameter of the target object in the current image frame.
In the present embodiment, the second position parameter of the target object in the current image frame is obtained based on the correlation between the position parameters of two adjacent image frames, which reduces the computation of locating the second position parameter.
Optionally, the tracking unit 130 is specifically configured to: determine, based on the tracking of the target object in the previous image frame, the datum mark of the target object in the current frame, wherein the datum mark is a pixel characterizing the display position of the target object in the previous image frame; determine the offset vector of each characteristic point in the current image frame relative to the datum mark, wherein a characteristic point is a first pixel of a first gray value, and the difference between the first gray value and a second gray value of a second pixel adjacent to the first pixel meets a preset difference condition; determine, based on the offset vectors, the mean-shift vector of the characteristic points relative to the datum mark, wherein the mean-shift vector includes a mean shift direction and a mean shift amount; and locate, based on the datum mark and the mean-shift vector, the target point corresponding to the target object, wherein the target point is the datum mark of the next image frame and corresponds to the second position parameter.
In the present embodiment, the first position parameter may include the coordinates of the target point of the previous image frame, and the second position parameter can be the coordinates of the target point of the current image frame. Through the determination of the mean-shift vector, the target point of the current image frame can be quickly located based on the target point of the previous image frame.
Optionally, the target object is the graphic object shown in the focus area of the images to be recognized. This can reduce the identification of unnecessary graphic objects and the return of AR information for them, reduce the display of unwanted graphic information, and reduce information interference for the user.
Optionally, the display unit 110 is further configured to display an acquisition prompt on the picture of the video during acquisition of the AR information, wherein the acquisition prompt is used to prompt that the AR information is currently being acquired.
In the present embodiment, through the display of the acquisition prompt, the user can be informed that AR information is currently being acquired, which reduces the user's anxiety while waiting and further improves the user experience.
As shown in Fig. 7, the present embodiment provides a display terminal, including:
a display 210, used for information display;
a memory 220, used for storing a computer program;
a processor 230, connected with the display and the memory, used for controlling the display terminal, by executing the computer program, to perform the AR processing method provided by any one of the aforementioned embodiments, for example, the AR processing method shown in Fig. 1.
In the present embodiment, the display 210 can be any of various types of displays, such as a liquid crystal display, a projection display, or an electronic ink display.
The memory 220 can be any of various types of storage media, for example a random access storage medium, a read-only storage medium, flash memory, or an optical disc. In the present embodiment, the memory 220 includes at least in part a non-transitory storage medium, and the non-transitory storage medium here can be used to store the computer program.
The processor 230 can be any of various processors or processing circuits such as a CPU, MCU, DSP, AP, PLC, or ASIC, and can, by executing the computer program, superimpose and display the AR information in the current image frame of the video on the display 210.
As shown in Fig. 7, the display 210, memory 220, and processor 230 of the terminal are connected by a bus 250; the bus 250 may include, for example, an Inter-Integrated Circuit (IIC) bus or a Peripheral Component Interconnect (PCI) bus.
In some embodiments, the display terminal may also include a network interface 240, which can be used to connect to the network side for connection with the service platform.
The present embodiment also provides a computer storage medium storing a computer program; the computer program, after being executed by a processor, can realize the AR processing method provided by any one of the aforementioned embodiments, for example, the AR processing method shown in Fig. 1.
The computer storage medium can be any of various types of storage media, optionally a non-transitory storage medium. The computer storage medium may be a removable storage device, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or optical disc, or any of various media that can store program code.
A specific example is provided below in conjunction with any of the above embodiments:
As shown in Fig. 8, this example provides an AR processing system, including:
a client and a server end;
the client can be the terminal that displays the AR information;
the server end can provide the network-side AR service platform that supports processing for the client.
The client includes:
a kernel module, corresponding to the kernel of the operating system, which can be used for background information processing, for example, obtaining AR information through interaction with the service platform;
an AR engine (AR SDK), used to obtain the location information of the target object and send the corresponding location parameter, based on the location information, to the display screen;
a display screen, corresponding to the display user interface, which can be used for video display and, based on the location parameter provided by the AR SDK, correctly superimposes the AR information forwarded from the kernel module near the corresponding target object.
The server end may include a proxy server, an identification server, and a search server.
The proxy server is used for information exchange with the client, for example, receiving the images to be recognized sent by the client.
The identification server is connected with the proxy server and is used to receive the images to be recognized forwarded by the proxy server and then give the recognition result to the search server.
The search server is connected with the identification server and is used to query the AR information based on the recognition result and send the AR information to the client through the proxy server.
A method of applying AR information in the above system is provided in detail below, including:
Real-scene object scanning and tracking is a process in which the terminal continuously uploads real-time pictures, the cloud identifies them, and the terminal then displays the result. In this example, the background of the terminal is connected with the cloud identification server, the search server, and the proxy server that performs information integration. The terminal includes the AR SDK, the data transmission unit for data transfer between the terminal and the network platform, and the UI unit that performs display and interacts with the user.
The detailed process is as follows:
The terminal opens the camera through an application; at this time the video stream is imported by the UI unit into the terminal's network transmission unit. The network transmission unit performs FAST characteristic point detection; the number of FAST characteristic points can represent whether the object in the picture has sufficient conditions for identification. An image meeting the characteristic point requirement is an image meeting the preset clarity condition, and this image frame is then passed to the background proxy server.
The background proxy server receives the picture uploaded by the terminal and sends it to the cloud identification server. The cloud identification server performs large-scale image identification, recognizing the category, position, and number information of the objects in the image, and then passes the result to the background proxy server.
After the background proxy server obtains the image category information, it sends the information to the search server to retrieve consultation information related to this object category; if there is no relevant consultation information, an empty result is returned directly. The proxy server then sends the image information and the consultation information to the terminal. The consultation information here is one kind of the aforementioned AR information.
The terminal receives this with a data transmission module. If the information were passed directly to the UI for drawing, then, because of the time consumed by network transmission and identification, the currently moving object may have changed position, and drawing according to the position in the uploaded frame would likely produce a drawing offset. So at this time the data transmission module does not push the information to the UI but instead asks the AR SDK for a position update.
The AR SDK passes the image frame data from the transmission module to a local tracking module. The local tracking module uses the mean-shift algorithm: it first calculates the mean offset of the characteristic points of the first frame, then uses the result as a new starting point to find the shifted positions of the corresponding characteristic points in the next frame. It can thus continuously follow the object in the image and obtain the location information of the object in real time. When it receives the object passed by the kernel, it can return the latest position of this object to the kernel module.
The kernel updates the latest position information of the object and passes the consultation information, category information, and location information together to the UI unit. After the UI unit receives the relevant information, it can draw and annotate on the screen, and the position will be accurate.
Specifically, as shown in Fig. 8, the AR information processing method may include:
Step 1: The display screen provides the video stream to the kernel module;
Step 2: The kernel module provides the images to be recognized to the proxy server;
Step 3: The proxy server provides the images to be recognized to the identification server;
Step 4: The identification server gives the recognition result to the proxy server; in some embodiments the identification server can directly feed the recognition result back to the search server;
Step 5: The proxy server sends the recognition result to the search server;
Step 6: The search server returns the AR information found based on the recognition result to the proxy server;
Step 7: The proxy server transmits the AR information to the kernel module of the client;
Step 8: The kernel module requests a new location parameter from the AR SDK; the location parameter here is the location parameter of the target object in the current image frame;
Step 9: The AR SDK sends the updated location parameter to the kernel module;
Step 10: The kernel module returns the AR information and the updated location parameter to the display screen, so that the display screen can superimpose and display the AR information near the target object while displaying the video.
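Steps 2 through 10 can be sketched as one round trip; `ProxyStub` and `SdkStub` are invented stand-ins for the proxy server (with its identification and search servers behind it) and the AR SDK, not interfaces defined by the patent:

```python
class ProxyStub:
    """Stands in for the proxy server (steps 2-7)."""
    def identify(self, frame):
        return {"category": "cup"}          # steps 3-4: identification server
    def search(self, result):
        return "AR info about " + result["category"]  # steps 5-6: search server

class SdkStub:
    """Stands in for the AR SDK's local tracking (steps 8-9)."""
    def latest_position(self):
        return (120, 80)

def kernel_round_trip(frame, proxy, sdk):
    """Steps 2-10: identify, search, refresh the position, and hand the AR
    information plus the up-to-date location parameter to the display."""
    result = proxy.identify(frame)          # steps 2-4
    ar_info = proxy.search(result)          # steps 5-7
    position = sdk.latest_position()        # steps 8-9: avoids drawing at the
                                            # stale position of the uploaded frame
    return ar_info, position                # step 10: both go to the display
```

The separation matters: the AR information comes from the slow network path, while the location parameter comes from the fast local tracker, so the overlay lands where the object is now.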
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be realized in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and there may be other division manners in actual implementation, such as combining multiple units or components, integrating them into another system, or ignoring or not executing some features. In addition, the mutual coupling, direct coupling, or communication connection between the components shown or discussed may be realized through some interfaces, and the indirect coupling or communication connection of devices or units may be electrical, mechanical, or of other forms.
The units described above as separate members may or may not be physically separated, and the components displayed as units may or may not be physical units; they can be located in one place or distributed over multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present invention can all be integrated into one processing module, or each unit can serve individually as one unit, or two or more units can be integrated into one unit. The integrated unit can be realized in the form of hardware or in the form of hardware plus a software functional unit.
The above description is merely specific embodiments, but the scope of protection of the present invention is not limited thereto. Any person familiar with the art can easily think of changes or replacements within the technical scope disclosed by the present invention, and these should all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention should be based on the protection scope of the claims.
Claims (19)
1. An augmented reality (AR) processing method, characterized in that it is applied in a display terminal and includes:
based on a video stream, displaying a video;
obtaining the AR information of a target object in the video;
tracking the display position of the target object in the currently displayed current image frame of the video;
according to the display position, superimposing the AR information into the current image frame.
2. The method according to claim 1, characterized in that
obtaining the AR information of the target object in the video includes:
extracting one or more frames of images to be recognized in the video stream that meet a preset clarity condition;
obtaining the AR information corresponding to the recognition result based on the target object in the one or more frames of images to be recognized.
3. The method according to claim 2, characterized in that
obtaining the AR information corresponding to the recognition result based on the target object in the one or more frames of images to be recognized includes:
sending the images to be recognized to a service platform, wherein the images to be recognized are used by the service platform for image recognition to obtain a recognition result;
receiving the AR information returned by the service platform based on the recognition result.
4. The method according to claim 2, characterized in that
extracting the one or more frames of images to be recognized in the video stream that meet the preset clarity condition includes:
extracting characteristic points of at least part of the images in the video stream, wherein a characteristic point is a first pixel of a first gray value, and the difference between the first gray value and a second gray value of a second pixel adjacent to the first pixel meets a preset difference condition;
determining, according to the number of the characteristic points, the images to be recognized that meet the preset clarity condition.
5. The method according to any one of claims 1 to 3, characterized in that
tracking the display position of the target object in the currently displayed current image frame of the video includes:
tracking the display position of the target object in each image frame of the video, to obtain the location parameter of the target object in the currently displayed current image frame;
and superimposing the AR information into the image frame of the video according to the display position includes:
superimposing and displaying the AR information in the current image frame according to the location parameter.
6. The method according to claim 5, characterized in that
tracking the display position of the target object in each image frame of the video, to obtain the location parameter of the target object in the currently displayed current image frame, includes:
locating the first position parameter of the target object in the previous image frame of the video stream;
searching, based on the first position parameter, for the second position parameter of the target object in the current image frame.
7. The method according to claim 6, characterized in that
searching, based on the first position parameter, for the second position parameter of the target object in the current image frame includes:
determining, based on the tracking of the target object in the previous image frame, the datum mark of the target object in the current frame, wherein the datum mark is a pixel characterizing the display position of the target object in the previous image frame;
determining the offset vector of each characteristic point in the current image frame relative to the datum mark, wherein a characteristic point is a first pixel of a first gray value, and the difference between the first gray value and a second gray value of a second pixel adjacent to the first pixel meets a preset difference condition;
determining, based on the offset vectors, the mean-shift vector of the characteristic points relative to the datum mark, wherein the mean-shift vector includes a mean shift direction and a mean shift amount;
locating, based on the datum mark and the mean-shift vector, the target point corresponding to the target object, wherein the target point is the datum mark of the next image frame and corresponds to the second position parameter.
8. The method according to any one of claims 1 to 3, characterized in that
the target object is the graphic object shown in the focus area of the images to be recognized.
9. The method according to any one of claims 1 to 3, characterized in that
the method also includes:
during acquisition of the AR information, displaying an acquisition prompt on the picture of the video, wherein the acquisition prompt is used to prompt that the AR information is currently being acquired.
10. An augmented reality (AR) processing device, characterized in that it is applied in a display terminal and includes:
a display unit, configured to display a video based on a video stream;
an acquiring unit, configured to obtain the AR information of a target object in the video;
a tracking unit, configured to track the display position of the target object in the currently displayed current image frame of the video;
the display unit is further configured to superimpose the AR information into the current image frame according to the display position.
11. The device according to claim 10, characterized in that
the acquiring unit is specifically configured to extract one or more frames of images to be recognized in the video stream that meet a preset clarity condition, and to obtain the AR information corresponding to the recognition result based on the target object in the one or more frames of images to be recognized.
12. The device according to claim 11, characterized in that
the acquiring unit is specifically configured to send the images to be recognized to a service platform, wherein the images to be recognized are used by the service platform for image recognition to obtain a recognition result, and to receive the AR information returned by the service platform based on the recognition result.
13. The device according to claim 11, characterized in that
the acquiring unit is specifically configured to extract characteristic points of at least part of the images in the video stream, wherein a characteristic point is a first pixel of a first gray value, and the difference between the first gray value and a second gray value of a second pixel adjacent to the first pixel meets a preset difference condition; and to determine, according to the number of the characteristic points, the images to be recognized that meet the preset clarity condition.
14. The device according to any one of claims 10 to 13, characterized in that
the tracking unit is specifically configured to track the display position of the target object in each image frame of the video, to obtain the location parameter of the target object in the currently displayed current image frame;
the display unit is specifically configured to superimpose and display the AR information in the current image frame according to the location parameter.
15. The device according to claim 14, characterized in that
the tracking unit is specifically configured to: determine, based on the tracking of the target object in the previous image frame, the datum mark of the target object in the current frame, wherein the datum mark is a pixel characterizing the display position of the target object in the previous image frame; determine the offset vector of each characteristic point in the current image frame relative to the datum mark, wherein a characteristic point is a first pixel of a first gray value, and the difference between the first gray value and a second gray value of a second pixel adjacent to the first pixel meets a preset difference condition; determine, based on the offset vectors, the mean-shift vector of the characteristic points relative to the datum mark, wherein the mean-shift vector includes a mean shift direction and a mean shift amount; and locate, based on the datum mark and the mean-shift vector, the target point corresponding to the target object, wherein the target point is the datum mark of the next image frame and corresponds to the second position parameter.
16. The device according to any one of claims 10 to 13, characterized in that
the target object is the graphic object shown in the focus area of the images to be recognized.
17. The device according to any one of claims 10 to 13, characterized in that
the display unit is further configured to display an acquisition prompt on the picture of the video during acquisition of the AR information, wherein the acquisition prompt is used to prompt that the AR information is currently being acquired.
18. A display terminal, characterized by including:
a display, used for information display;
a memory, used for storing a computer program;
a processor, connected with the display and the memory, used for controlling the display terminal, by executing the computer program, to perform the AR processing method of any one of claims 1 to 9.
19. A computer storage medium, wherein a computer program is stored in the computer storage medium; the computer program, after being executed by a processor, can realize the AR processing method of any one of claims 1 to 9.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710340898.0A CN108875460B (en) | 2017-05-15 | 2017-05-15 | Augmented reality processing method and device, display terminal and computer storage medium |
PCT/CN2018/080094 WO2018210055A1 (en) | 2017-05-15 | 2018-03-22 | Augmented reality processing method and device, display terminal, and computer storage medium |
TW107111026A TWI669956B (en) | 2017-05-15 | 2018-03-29 | Method, device, display terminal, and storage medium for processing augumented reality |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710340898.0A CN108875460B (en) | 2017-05-15 | 2017-05-15 | Augmented reality processing method and device, display terminal and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108875460A true CN108875460A (en) | 2018-11-23 |
CN108875460B CN108875460B (en) | 2023-06-20 |
Family
ID=64273268
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710340898.0A Active CN108875460B (en) | 2017-05-15 | 2017-05-15 | Augmented reality processing method and device, display terminal and computer storage medium |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN108875460B (en) |
TW (1) | TWI669956B (en) |
WO (1) | WO2018210055A1 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111583329A (en) * | 2020-04-09 | 2020-08-25 | 深圳奇迹智慧网络有限公司 | Augmented reality glasses display method and device, electronic equipment and storage medium |
CN112017300A (en) * | 2020-07-22 | 2020-12-01 | 青岛小鸟看看科技有限公司 | Processing method, device and equipment for mixed reality image |
CN112328628A (en) * | 2020-11-10 | 2021-02-05 | 山东爱城市网信息技术有限公司 | Bus real-time query method and system based on AR technology |
CN112445318A (en) * | 2019-08-30 | 2021-03-05 | 龙芯中科技术股份有限公司 | Object display method and device, electronic equipment and storage medium |
CN112583976A (en) * | 2020-12-29 | 2021-03-30 | 咪咕文化科技有限公司 | Graphic code display method, equipment and readable storage medium |
CN113386785A (en) * | 2019-07-03 | 2021-09-14 | 北京百度网讯科技有限公司 | Method and apparatus for displaying augmented reality alert information |
CN114415839A (en) * | 2022-01-27 | 2022-04-29 | 歌尔科技有限公司 | Information display method, device, equipment and storage medium |
WO2022179311A1 (en) * | 2021-02-26 | 2022-09-01 | 维沃移动通信有限公司 | Display method and apparatus, and electronic device |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110619266B (en) * | 2019-08-02 | 2024-01-19 | 青岛海尔智能技术研发有限公司 | Target object identification method and device and refrigerator |
CN110856005B (en) * | 2019-11-07 | 2021-09-21 | 广州虎牙科技有限公司 | Live stream display method and device, electronic equipment and readable storage medium |
CN110784733B (en) * | 2019-11-07 | 2021-06-25 | 广州虎牙科技有限公司 | Live broadcast data processing method and device, electronic equipment and readable storage medium |
CN112734938A (en) * | 2021-01-12 | 2021-04-30 | 北京爱笔科技有限公司 | Pedestrian position prediction method, device, computer equipment and storage medium |
CN113596350B (en) * | 2021-07-27 | 2023-11-17 | 深圳传音控股股份有限公司 | Image processing method, mobile terminal and readable storage medium |
CN115361576A (en) * | 2022-07-20 | 2022-11-18 | 中国电信股份有限公司 | Video data processing method and device, and electronic equipment |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101750017A (en) * | 2010-01-18 | 2010-06-23 | 战强 | Visual detection method of multi-movement target positions in large view field |
CN101807300A (en) * | 2010-03-05 | 2010-08-18 | 北京智安邦科技有限公司 | Target fragment region merging method and device |
CN103426184A (en) * | 2013-08-01 | 2013-12-04 | 华为技术有限公司 | Optical flow tracking method and device |
US20150199848A1 (en) * | 2014-01-16 | 2015-07-16 | Lg Electronics Inc. | Portable device for tracking user gaze to provide augmented reality display |
CN104823152A (en) * | 2012-12-19 | 2015-08-05 | 高通股份有限公司 | Enabling augmented reality using eye gaze tracking |
CN105635712A (en) * | 2015-12-30 | 2016-06-01 | 视辰信息科技(上海)有限公司 | Augmented-reality-based real-time video recording method and recording equipment |
CN105654512A (en) * | 2015-12-29 | 2016-06-08 | 深圳羚羊微服机器人科技有限公司 | Target tracking method and device |
CN105760826A (en) * | 2016-02-03 | 2016-07-13 | 歌尔声学股份有限公司 | Face tracking method and device and intelligent terminal |
CN106056046A (en) * | 2016-05-20 | 2016-10-26 | 北京集创北方科技股份有限公司 | Method and device of extracting features from image |
CN106250938A (en) * | 2016-07-19 | 2016-12-21 | 易视腾科技股份有限公司 | Method for tracking target, augmented reality method and device thereof |
CN106371585A (en) * | 2016-08-23 | 2017-02-01 | 塔普翊海(上海)智能科技有限公司 | Augmented reality system and method |
CN106454350A (en) * | 2016-06-28 | 2017-02-22 | 中国人民解放军陆军军官学院 | Non-reference evaluation method for infrared image |
CN106650965A (en) * | 2016-12-30 | 2017-05-10 | 触景无限科技(北京)有限公司 | Remote video processing method and apparatus |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103119627B (en) * | 2010-09-20 | 2017-03-08 | 高通股份有限公司 | Adaptable framework for cloud-assisted augmented reality |
KR101338818B1 (en) * | 2010-11-29 | 2013-12-06 | 주식회사 팬택 | Mobile terminal and information display method using the same |
JP2015529911A (en) * | 2012-09-28 | 2015-10-08 | インテル コーポレイション | Determination of augmented reality information |
CN105103198A (en) * | 2013-04-04 | 2015-11-25 | 索尼公司 | Display control device, display control method and program |
CN104936034B (en) * | 2015-06-11 | 2019-07-05 | 三星电子(中国)研发中心 | Information input method and device based on video |
CN105760849B (en) * | 2016-03-09 | 2019-01-29 | 北京工业大学 | Target object behavioral data acquisition methods and device based on video |
- 2017-05-15: CN application CN201710340898.0A filed (patent CN108875460B, status: Active)
- 2018-03-22: WO application PCT/CN2018/080094 filed (WO2018210055A1, Application Filing)
- 2018-03-29: TW application TW107111026 filed (TWI669956B, active)
Non-Patent Citations (1)
Title |
---|
Chen Zhixiang; Wu Liming; Gao Shiping: "Mobile Augmented Reality Tracking Technology Based on the FAST-SURF Algorithm", Computer and Modernization, no. 09, pages 109-112 |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113386785A (en) * | 2019-07-03 | 2021-09-14 | 北京百度网讯科技有限公司 | Method and apparatus for displaying augmented reality alert information |
CN112445318A (en) * | 2019-08-30 | 2021-03-05 | 龙芯中科技术股份有限公司 | Object display method and device, electronic equipment and storage medium |
CN111583329A (en) * | 2020-04-09 | 2020-08-25 | 深圳奇迹智慧网络有限公司 | Augmented reality glasses display method and device, electronic equipment and storage medium |
CN111583329B (en) * | 2020-04-09 | 2023-08-04 | 深圳奇迹智慧网络有限公司 | Augmented reality glasses display method and device, electronic equipment and storage medium |
CN112017300A (en) * | 2020-07-22 | 2020-12-01 | 青岛小鸟看看科技有限公司 | Processing method, device and equipment for mixed reality image |
CN112328628A (en) * | 2020-11-10 | 2021-02-05 | 山东爱城市网信息技术有限公司 | Bus real-time query method and system based on AR technology |
CN112583976A (en) * | 2020-12-29 | 2021-03-30 | 咪咕文化科技有限公司 | Graphic code display method, equipment and readable storage medium |
CN112583976B (en) * | 2020-12-29 | 2022-02-18 | 咪咕文化科技有限公司 | Graphic code display method, equipment and readable storage medium |
WO2022179311A1 (en) * | 2021-02-26 | 2022-09-01 | 维沃移动通信有限公司 | Display method and apparatus, and electronic device |
CN114415839A (en) * | 2022-01-27 | 2022-04-29 | 歌尔科技有限公司 | Information display method, device, equipment and storage medium |
WO2023142265A1 (en) * | 2022-01-27 | 2023-08-03 | 歌尔股份有限公司 | Information display method and apparatus, device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2018210055A1 (en) | 2018-11-22 |
TWI669956B (en) | 2019-08-21 |
TW201902225A (en) | 2019-01-01 |
CN108875460B (en) | 2023-06-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108875460A (en) | Augmented reality processing method and processing device, display terminal and computer storage medium | |
US11657609B2 (en) | Terminal device, information processing device, object identifying method, program, and object identifying system | |
CN104285244B (en) | The method and apparatus of view management for the image-driven of mark | |
US10122888B2 (en) | Information processing system, terminal device and method of controlling display of secure data using augmented reality | |
CN203276350U (en) | Information processing apparatus | |
US9754183B2 (en) | System and method for providing additional information using image matching | |
CN106197445B (en) | A kind of method and device of route planning | |
EP2791883A2 (en) | Information processing device, information processing method and program | |
US20120027305A1 (en) | Apparatus to provide guide for augmented reality object recognition and method thereof | |
CN108932051A (en) | Augmented reality image processing method, device and storage medium | |
CN108494836A (en) | Information-pushing method, device and equipment | |
CN102147665B (en) | Method and device for displaying information in input process and input method system | |
CN105988790B (en) | Information processing method, sending terminal and receiving terminal | |
CN107395780A (en) | Social communication method, apparatus and computer-processing equipment based on recognition of face | |
WO2017067810A1 (en) | Methods of detecting and managing a fiducial marker displayed on a display device | |
JP7103229B2 (en) | Suspiciousness estimation model generator | |
Deffeyes | Mobile augmented reality in the data center | |
CN105611108A (en) | Information processing method and electronic equipment | |
CN112767452B (en) | Active sensing method and system for camera | |
US11733842B2 (en) | Electronic device and control method thereof for determining displayed content based on user presence and interest | |
CN107491778A (en) | A kind of screen of intelligent device extracting method and system based on positioning image | |
CN109145681A (en) | For judging the method and device of target direction of rotation | |
JP5998952B2 (en) | Sign image placement support apparatus and program | |
CN109062403B (en) | PDA equipment | |
JP2015212967A (en) | Terminal device, object identification method and information processor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 1256620; Country of ref document: HK | |
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||