CN105279172A - Video matching method and device - Google Patents

Video matching method and device

Info

Publication number: CN105279172A (application CN201410306847.2A; granted as CN105279172B)
Authority: CN (China)
Prior art keywords: participle, unattached, metadata, video, weight factor
Legal status: Granted; currently active
Other languages: Chinese (zh)
Inventors: 邹伟华, 高洪金, 刘撑乾, 杜亚军
Current assignee: WELLAV TECHNOLOGIES Ltd. (original assignee: Huizhou Wellav Technologies Co., Ltd.)
Application filed by Huizhou Wellav Technologies Co., Ltd.; priority to CN201410306847.2A; published as CN105279172A; granted and published as CN105279172B
Abstract

The invention discloses a video matching method. The method comprises the following steps: obtaining the first description metadata of a video to be matched; determining the independent participles (word segments) in the first description metadata and calculating the first weight factor corresponding to each independent participle; performing near-synonym analysis between the independent participles in the first description metadata and the independent participles in the second description metadata of pre-stored videos, and determining the matching degree of the video to be matched against the pre-stored videos according to the first weight factors and the second weight factors corresponding to the independent participles in the second description metadata; and outputting a preset number of pre-stored videos with the highest matching degree as the result. The invention correspondingly discloses a video matching device. With this technical scheme, an input video to be matched can be matched against the pre-stored videos to find similar videos, improving both accuracy and speed.

Description

Video matching method and device
Technical field
The present invention relates to the technical field of computer networks, and in particular to a video matching method and device.
Background technology
With the rapid development of Internet technology, video resources on the network have become increasingly abundant, and people can request and browse videos from servers over the network at any time. As the number of videos stored on servers grows, however, organizing video resources effectively becomes an increasingly important problem. For example, when a user requests a video on demand from a video website server, or after a user has browsed a video, the server should quickly recommend similar videos to the user. Whether the server can match similar videos quickly determines whether the video website can attract users and improve the user experience. Current video matching methods are often inaccurate and slow.
Summary of the invention
Accordingly, it is necessary to provide a video matching method and device that can match an input video against pre-stored videos to find similar videos, improving both accuracy and speed.
A video matching method, comprising:
obtaining the first description metadata of a video to be matched;
determining the independent participles contained in the first description metadata, and calculating the first weight factor corresponding to each independent participle;
performing near-synonym analysis between the independent participles in the first description metadata and the independent participles in the second description metadata of each pre-stored video, and determining the matching degree between the video to be matched and the pre-stored video according to the first weight factors and the second weight factors corresponding to the independent participles in the second description metadata;
outputting the preset number of pre-stored videos with the highest matching degree as the result.
In one embodiment, before the step of obtaining the first description metadata of the video to be matched, the method further comprises:
pre-determining the independent participles contained in the second description metadata of the pre-stored videos, calculating the second weight factor corresponding to each independent participle, and storing the independent participles and the second weight factors.
In one embodiment, the step of determining the independent participles contained in the first description metadata and calculating the first weight factor corresponding to each independent participle comprises:
performing word segmentation on each metadata item in the first description metadata to obtain metadata-item participles;
merging identical participles across all the metadata items to obtain the independent participles;
calculating the first weight factor of each independent participle according to the preset weights of the metadata-item participles corresponding to it.
The step of determining the independent participles contained in the second description metadata and calculating the second weight factor corresponding to each independent participle comprises:
performing word segmentation on each metadata item in the second description metadata to obtain metadata-item participles;
merging identical participles across all the metadata items to obtain the independent participles;
calculating the second weight factor of each independent participle according to the preset weights of the metadata-item participles corresponding to it.
In one embodiment, the steps of performing word segmentation on each metadata item in the first description metadata and in the second description metadata to obtain metadata-item participles each comprise:
removing participles without semantic content from the metadata-item participles.
In one embodiment, the method further comprises:
sorting the independent participles contained in the first description metadata according to their Unicode encoding; and
sorting the independent participles contained in the second description metadata according to their Unicode encoding.
A video matching device, comprising:
an input module for obtaining the first description metadata of a video to be matched;
an analysis module for determining the independent participles contained in the first description metadata and calculating the first weight factor corresponding to each independent participle;
a storage module for storing the independent participles in the second description metadata of the pre-stored videos and the corresponding second weight factors;
a matching module for performing near-synonym analysis between the independent participles in the first description metadata and the independent participles in the second description metadata of each pre-stored video, and determining the matching degree between the video to be matched and the pre-stored video according to the first weight factors and the second weight factors corresponding to the independent participles in the second description metadata;
an output module for outputting the preset number of pre-stored videos with the highest matching degree as the result.
In one embodiment, the analysis module is also used to pre-determine the independent participles contained in the second description metadata of the pre-stored videos and to calculate the second weight factor corresponding to each independent participle.
In one embodiment, the analysis module performs word segmentation on each metadata item in the first description metadata to obtain metadata-item participles; merges identical participles across all the metadata items to obtain the independent participles; and calculates the first weight factor of each independent participle according to the preset weights of the metadata-item participles corresponding to it. The analysis module likewise performs word segmentation on each metadata item in the second description metadata, merges identical participles, and calculates the second weight factor of each independent participle.
In one embodiment, the analysis module is also used to remove participles without semantic content from the metadata-item participles.
In one embodiment, the analysis module is also used to sort the independent participles contained in the first description metadata according to their Unicode encoding, and to sort the independent participles contained in the second description metadata according to their Unicode encoding.
With the above video matching method and device, the first description metadata of the video to be matched is obtained, the independent participles contained in it are determined, and the first weight factor of each independent participle is calculated. Near-synonym analysis is then performed between the independent participles in the first description metadata and those of the pre-stored videos to determine the matching degree, and the preset number of pre-stored videos with the highest matching degree is output as the result. An input video to be matched can thus be matched against the pre-stored videos to find similar videos, improving both accuracy and speed.
Brief description of the drawings
Fig. 1 is a flow diagram of the video matching method in one embodiment;
Fig. 2 is a structural diagram of the video matching device in one embodiment.
Detailed description of the embodiments
To make the objectives, technical scheme, and advantages of the present invention clearer, the invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and are not intended to limit it.
Referring to Fig. 1, in one embodiment, a video matching method is provided. The method can be applied, without limitation, to a video-on-demand server: after a user requests or browses a video, the server matches that video against the pre-stored videos with this method, obtains a number of similar videos, and recommends them to the user. The video matching method comprises:
Step 101: obtain the first description metadata of the video to be matched.
Specifically, in this embodiment, description metadata refers to information describing a video, such as the film title, director, and actor information. For some videos, such as films, the description metadata may be prepared in advance by the producer; for others it must be annotated manually. A mapping between videos and their description metadata is thus established. After the on-demand server determines which video the user has requested, it takes that video as the video to be matched and obtains the corresponding first description metadata from the mapping.
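The mapping from a video to its description metadata can be modeled as a plain dictionary. The sketch below is illustrative only: the video identifier, item names, and field values are assumptions, not taken from the patent.

```python
# Illustrative model of the video -> description-metadata mapping.
# All identifiers and field values here are assumed for the example.

CATALOG: dict[str, dict[str, str]] = {
    "the-croods": {
        "title": "The Croods",
        "keywords": "director, lead actor",
        "summary": "A caveman family lives under the protection of the father",
        "category": "animation",
    },
}

def first_description_metadata(video_id: str) -> dict[str, str]:
    """Resolve the video requested by the user to its description metadata."""
    return CATALOG[video_id]
```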
Step 102: determine the independent participles contained in the first description metadata, and calculate the first weight factor corresponding to each independent participle.
Specifically, after obtaining the first description metadata, the on-demand server processes it as follows:
(a) Perform word segmentation on each metadata item in the first description metadata to obtain metadata-item participles.
The data in the description metadata is divided into different metadata items, which may include, for example, title, keywords, summary, and category. The title may be a film or video name; the keywords may include, without limitation, information such as the director and actors; the summary may contain key words from the plot introduction; and the category gives the classification of the film. Each metadata item is segmented according to its language. For English text, words are separated by a natural delimiter (the space), so the metadata item can be segmented on delimiters. For Chinese text, semantic analysis is also required, and segmentation is performed according to the semantics; the semantic analysis can follow conventional techniques and is not restricted here. After segmenting each metadata item, one obtains title participles {X1, X2, ...}, summary participles {Y1, Y2, ...}, and so on.
Take the film "Crazy Primitive Man" (The Croods) as an example, and suppose its first description metadata contains only a title and a summary. The title is "Crazy Primitive Man", and the summary is "Primitive man Crood's family of six lives under the protection of the father". After word segmentation, the title participles comprise {crazy, primitive man}, and the summary participles comprise {primitive man, Crood, one, six, mouth, father, protection, under, life}.
Optionally, in this embodiment, participles without semantic content can further be removed from the metadata-item participles, which reduces the amount of data to process and improves processing speed. In the "Crazy Primitive Man" example, participles that carry no standalone meaning (function words) can be removed.
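Step (a), together with the optional stop-word removal, can be sketched as follows. This is a minimal illustration for whitespace-delimited text; the stop-word set is an assumption, and real Chinese metadata would need a dictionary-based segmenter instead.

```python
# Sketch of step (a) with optional removal of participles that have no
# standalone meaning. Whitespace splitting suits English metadata;
# Chinese text would need a dictionary-based segmenter. The stop-word
# set is an illustrative assumption.

STOP_WORDS = {"the", "a", "an", "of", "under"}

def segment_item(text: str) -> list[str]:
    """Split one metadata item into word segments, dropping stop words."""
    return [w for w in text.lower().split() if w not in STOP_WORDS]

def segment_metadata(metadata: dict[str, str]) -> dict[str, list[str]]:
    """Return one list of metadata-item participles per metadata item."""
    return {item: segment_item(text) for item, text in metadata.items()}

segments = segment_metadata({
    "title": "The Croods",
    "summary": "A caveman family lives under the protection of the father",
})
# segments["title"] -> ["croods"]; stop words such as "the" are removed
```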
(b) Merge identical participles across all the metadata items to obtain the independent participles.
Specifically, identical participles appearing in different metadata items are merged into a single independent participle.
In the "Crazy Primitive Man" example, the title participle "primitive man" and the summary participle "primitive man" merge into one independent participle, while the title participle "crazy" stands as an independent participle on its own, and so on.
(c) Calculate the first weight factor of each independent participle according to the preset weights of the metadata-item participles corresponding to it.
In this embodiment, based on empirical values for the relative importance of the different metadata items, a preset weight is defined for each type of metadata-item participle. The preset weights of all metadata-item participles corresponding to an independent participle are summed to give its weight. The ratio of an independent participle's weight to the total weight of all independent participles then determines its first weight factor.
Continuing the "Crazy Primitive Man" example, the preset weights of title participles and summary participles may be set to 70 and 30 respectively, summing to 100. After merging, the weight of the independent participle "primitive man" is 100 (70 + 30), the weight of "crazy" is 70, and so on; summing the weights of all independent participles gives a total weight of 470 (here without removing the participles that lack semantic content). The weight of "crazy" as a share of the total weight, scaled to a range of 100, gives its first weight factor of 14.9.
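Steps (b) and (c), merging identical participles and normalizing the accumulated preset weights to a 0-100 scale, can be sketched as below. The preset weights of 70 (title) and 30 (summary) follow the worked example above; the participles themselves are illustrative.

```python
# Sketch of steps (b) and (c): merge identical participles across
# metadata items, accumulate their preset weights, and normalize to a
# 0-100 scale. The preset weights (title 70, summary 30) follow the
# patent's worked example; the participles are illustrative.

PRESET_WEIGHTS = {"title": 70, "summary": 30}

def weight_factors(segments: dict[str, list[str]]) -> dict[str, float]:
    weights: dict[str, float] = {}
    for item, words in segments.items():
        for word in set(words):  # each item contributes its preset weight once
            weights[word] = weights.get(word, 0.0) + PRESET_WEIGHTS[item]
    total = sum(weights.values())
    return {word: round(100 * w / total, 1) for word, w in weights.items()}

factors = weight_factors({
    "title": ["crazy", "caveman"],
    "summary": ["caveman", "life"],
})
# "caveman" appears in both items: (70 + 30) / 200 * 100 -> 50.0
```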
Step 103: perform near-synonym analysis between the independent participles in the first description metadata and the independent participles in the second description metadata of the pre-stored videos, and determine the matching degree between the video to be matched and each pre-stored video according to the first weight factors and the second weight factors corresponding to the independent participles in the second description metadata.
Specifically, in this embodiment, the independent participles contained in the second description metadata of the pre-stored videos are determined in advance, the second weight factor corresponding to each independent participle is calculated, and the independent participles and second weight factors are stored on the on-demand server.
The determination of the independent participles of the second description metadata and the computation of the second weight factors are identical to the processing of the first description metadata, so the comparison is based on the same standard; the process is not repeated here.
In this embodiment, a near-synonym table is established in advance, recording the similarity between a given participle and several other participles; for example, the similarity between "primitive man" and "Homo sapiens" might be 0.8. Once the independent participles of the video to be matched and their first weight factors have been determined, the closest near-synonym of each independent participle among a pre-stored video's independent participles is retrieved, and the products of the first weight factor, the similarity, and the second weight factor are accumulated as the final matching degree.
Further and optionally, the independent participles contained in the first description metadata are sorted according to their Unicode encoding, and the independent participles contained in the second description metadata are sorted in advance according to their Unicode encoding. This improves the retrieval speed of the near-synonym matching and thereby the speed of the video matching.
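One way to exploit the Unicode ordering is binary search over the sorted participle list, for instance with Python's standard bisect module. This is a sketch of the idea, not the patent's implementation; Python's default string ordering is by code point, so sorted() gives the required order directly.

```python
# Keeping a video's independent participles sorted by Unicode code
# point lets retrieval use binary search instead of a linear scan.

import bisect

participles = sorted(["life", "passion", "homo-sapiens"])  # code-point order

def contains(sorted_participles: list[str], word: str) -> bool:
    """O(log n) membership test on a code-point-sorted list."""
    i = bisect.bisect_left(sorted_participles, word)
    return i < len(sorted_participles) and sorted_participles[i] == word

# contains(participles, "life") -> True
```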
Step 103 is illustrated with the "Crazy Primitive Man" example as follows:
Suppose the independent participles of "Crazy Primitive Man" and their first weight factors, sorted by Unicode encoding, are "primitive man" 21.3, "crazy" 14.9, "life" 6.4, ...
Suppose the independent participles of a pre-stored video and their second weight factors, sorted by Unicode encoding, are "Homo sapiens" 15, "passion" 20, "life" 50, ...
Suppose the near-synonym table gives a similarity of 0.8 between "primitive man" and "Homo sapiens", and a similarity of 0.4 between "crazy" and "passion", ...
Then the matching degree of the two videos is: 21.3 × 15 × 0.8 + 14.9 × 20 × 0.4 + 6.4 × 50 + ...
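The arithmetic above can be reproduced with a short sketch. Reading "closest near-synonym" as the stored participle that maximizes similarity times second weight factor is an assumption, as are the ASCII spellings of the words; identical participles are given similarity 1.0.

```python
# Sketch of the matching-degree computation: for each independent
# participle of the query video, take the best near-synonym score among
# the stored video's participles and accumulate
#   first weight factor x similarity x second weight factor.
# The similarity table and word spellings are illustrative assumptions.

SYNONYMS = {
    ("primitive-man", "homo-sapiens"): 0.8,
    ("crazy", "passion"): 0.4,
}

def similarity(a: str, b: str) -> float:
    """Identical participles match fully; otherwise consult the table."""
    if a == b:
        return 1.0
    return SYNONYMS.get((a, b), SYNONYMS.get((b, a), 0.0))

def matching_degree(query: dict[str, float], stored: dict[str, float]) -> float:
    total = 0.0
    for q_word, q_factor in query.items():
        best = max((similarity(q_word, s_word) * s_factor
                    for s_word, s_factor in stored.items()), default=0.0)
        total += q_factor * best
    return total

query = {"primitive-man": 21.3, "crazy": 14.9, "life": 6.4}
stored = {"homo-sapiens": 15, "passion": 20, "life": 50}
degree = matching_degree(query, stored)
# 21.3*15*0.8 + 14.9*20*0.4 + 6.4*50*1.0 = 694.8, as in the text above
```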
Step 104: output the preset number of pre-stored videos with the highest matching degree as the result.
Specifically, the video to be matched is compared with each pre-stored video in turn, and the pairwise matching degrees are calculated. The matching degrees are then sorted from high to low, and the pre-stored videos with the highest matching degree are taken as the output result. The on-demand server can then recommend these results to the user.
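Selecting the preset number of highest-scoring pre-stored videos is a standard top-N selection, for example with heapq.nlargest; the video ids and scores below are illustrative.

```python
# Step 104 as a top-N selection: score every pre-stored video against
# the query, then keep the preset number with the highest matching
# degree. Video ids and scores are illustrative.

import heapq

def top_matches(scores: dict[str, float], preset_number: int) -> list[str]:
    """Ids of the preset_number pre-stored videos with the highest score."""
    best = heapq.nlargest(preset_number, scores.items(), key=lambda kv: kv[1])
    return [video_id for video_id, _ in best]

scores = {"video-a": 694.8, "video-b": 120.0, "video-c": 300.5}
# top_matches(scores, 2) -> ["video-a", "video-c"]
```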
Referring to Fig. 2, in one embodiment, a video matching device is provided. The device can be applied, without limitation, in an on-demand server to match the video requested by a user against the pre-stored videos. The device comprises:
an input module 201 for obtaining the first description metadata of the video to be matched;
an analysis module 202 for determining the independent participles contained in the first description metadata and calculating the first weight factor corresponding to each independent participle;
a storage module 203 for storing the independent participles in the second description metadata of the pre-stored videos and the corresponding second weight factors;
a matching module 204 for performing near-synonym analysis between the independent participles in the first description metadata and the independent participles in the second description metadata of the pre-stored videos, and determining the matching degree between the video to be matched and each pre-stored video according to the first weight factors and the second weight factors corresponding to the independent participles in the second description metadata;
an output module 205 for outputting the preset number of pre-stored videos with the highest matching degree as the result.
In this embodiment, the analysis module 202 also determines the independent participles contained in the second description metadata of the pre-stored videos and calculates the second weight factor corresponding to each independent participle, and the storage module 203 stores the independent participles and the second weight factors.
Optionally, the analysis module 202 can perform word segmentation on each metadata item in the first description metadata to obtain metadata-item participles, merge identical participles across all the metadata items to obtain the independent participles, and calculate the first weight factor of each independent participle according to the preset weights of the metadata-item participles corresponding to it. The analysis module 202 processes the second description metadata in the same way.
Optionally, during processing, the analysis module 202 can remove participles without semantic content from the metadata-item participles and sort the independent participles contained in the first description metadata according to their Unicode encoding. Likewise, the analysis module 202 sorts the independent participles contained in the second description metadata in advance according to their Unicode encoding.
With the above video matching method and device, the first description metadata of the video to be matched is obtained, the independent participles contained in it are determined, and the first weight factor of each independent participle is calculated. Near-synonym analysis is then performed between the independent participles in the first description metadata and those of the pre-stored videos to determine the matching degree, and the preset number of pre-stored videos with the highest matching degree is output as the result. An input video to be matched can thus be matched against the pre-stored videos to find similar videos, improving both accuracy and speed.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be interpreted as limiting the scope of the patent claims. It should be pointed out that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the invention, and these all fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be determined by the appended claims.

Claims (10)

1. A video matching method, characterized in that the method comprises:
obtaining the first description metadata of a video to be matched;
determining the independent participles contained in the first description metadata, and calculating the first weight factor corresponding to each independent participle;
performing near-synonym analysis between the independent participles in the first description metadata and the independent participles in the second description metadata of each pre-stored video, and determining the matching degree between the video to be matched and the pre-stored video according to the first weight factors and the second weight factors corresponding to the independent participles in the second description metadata;
outputting the preset number of pre-stored videos with the highest matching degree as the result.
2. The method according to claim 1, characterized in that before the step of obtaining the first description metadata of the video to be matched, the method further comprises:
pre-determining the independent participles contained in the second description metadata of the pre-stored videos, calculating the second weight factor corresponding to each independent participle, and storing the independent participles and the second weight factors.
3. The method according to claim 2, characterized in that the step of determining the independent participles contained in the first description metadata and calculating the first weight factor corresponding to each independent participle comprises:
performing word segmentation on each metadata item in the first description metadata to obtain metadata-item participles;
merging identical participles across all the metadata items to obtain the independent participles;
calculating the first weight factor of each independent participle according to the preset weights of the metadata-item participles corresponding to it;
and the step of determining the independent participles contained in the second description metadata and calculating the second weight factor corresponding to each independent participle comprises:
performing word segmentation on each metadata item in the second description metadata to obtain metadata-item participles;
merging identical participles across all the metadata items to obtain the independent participles;
calculating the second weight factor of each independent participle according to the preset weights of the metadata-item participles corresponding to it.
4. The method according to claim 3, characterized in that the steps of performing word segmentation on each metadata item in the first description metadata and in the second description metadata to obtain metadata-item participles each comprise:
removing participles without semantic content from the metadata-item participles.
5. The method according to claim 3 or 4, characterized in that the method further comprises:
sorting the independent participles contained in the first description metadata according to their Unicode encoding; and
sorting the independent participles contained in the second description metadata according to their Unicode encoding.
6. A video matching device, characterized in that the device comprises:
an input module for obtaining the first description metadata of a video to be matched;
an analysis module for determining the independent participles contained in the first description metadata and calculating the first weight factor corresponding to each independent participle;
a storage module for storing the independent participles in the second description metadata of the pre-stored videos and the corresponding second weight factors;
a matching module for performing near-synonym analysis between the independent participles in the first description metadata and the independent participles in the second description metadata of each pre-stored video, and determining the matching degree between the video to be matched and the pre-stored video according to the first weight factors and the second weight factors corresponding to the independent participles in the second description metadata;
an output module for outputting the preset number of pre-stored videos with the highest matching degree as the result.
7. The device according to claim 6, characterized in that the analysis module is also used to pre-determine the independent participles contained in the second description metadata of the pre-stored videos and to calculate the second weight factor corresponding to each independent participle.
8. The device according to claim 7, characterized in that the analysis module performs word segmentation on each metadata item in the first description metadata to obtain metadata-item participles; merges identical participles across all the metadata items to obtain the independent participles; and calculates the first weight factor of each independent participle according to the preset weights of the metadata-item participles corresponding to it;
and the analysis module also performs word segmentation on each metadata item in the second description metadata to obtain metadata-item participles; merges identical participles across all the metadata items to obtain the independent participles; and calculates the second weight factor of each independent participle according to the preset weights of the metadata-item participles corresponding to it.
9. The device according to claim 8, characterized in that the analysis module is also used to remove participles without semantic content from the metadata-item participles.
10. The device according to claim 8 or 9, characterized in that the analysis module is also used to sort the independent participles contained in the first description metadata according to their Unicode encoding, and to sort the independent participles contained in the second description metadata according to their Unicode encoding.
CN201410306847.2A 2014-06-30 2014-06-30 Video matching method and device Active CN105279172B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410306847.2A CN105279172B (en) 2014-06-30 2014-06-30 Video matching method and device


Publications (2)

Publication Number Publication Date
CN105279172A true CN105279172A (en) 2016-01-27
CN105279172B CN105279172B (en) 2019-07-09

Family

ID=55148195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410306847.2A Active CN105279172B (en) 2014-06-30 2014-06-30 Video matching method and device

Country Status (1)

Country Link
CN (1) CN105279172B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902200A (en) * 2019-02-22 2019-06-18 网宿科技股份有限公司 A kind of method, apparatus and server of video search sequence
CN110072118A (en) * 2018-01-24 2019-07-30 优酷网络技术(北京)有限公司 Video matching method and device

Citations (7)

Publication number Priority date Publication date Assignee Title
CN101021857A (en) * 2006-10-20 2007-08-22 鲍东山 Video searching system based on content analysis
CN101021852A (en) * 2006-10-10 2007-08-22 鲍东山 Video search dispatching system based on content
CN101383782A (en) * 2008-10-16 2009-03-11 深圳市迅雷网络技术有限公司 Method and system for acquiring network resource identification
CN103207891A (en) * 2013-02-25 2013-07-17 北京智谷睿拓技术服务有限公司 Multimedia information displaying method and device
CN103491205A (en) * 2013-09-30 2014-01-01 北京奇虎科技有限公司 Related resource address push method and device based on video retrieval
CN103500214A (en) * 2013-09-30 2014-01-08 北京奇虎科技有限公司 Word segmentation information pushing method and device based on video searching
CN103686452A (en) * 2013-12-06 2014-03-26 北京普瑞众合国际科技有限公司 Addition processing method for video associated information



Also Published As

Publication number Publication date
CN105279172B (en) 2019-07-09

Similar Documents

Publication Publication Date Title
US20220035827A1 (en) Tag selection and recommendation to a user of a content hosting service
KR101721338B1 (en) Search engine and implementation method thereof
CN104573054B (en) A kind of information-pushing method and equipment
US11580181B1 (en) Query modification based on non-textual resource context
US9626424B2 (en) Disambiguation and tagging of entities
CN107330752B (en) Method and device for identifying brand words
CN107797982B (en) Method, device and equipment for recognizing text type
US9582486B2 (en) Apparatus and method for classifying and analyzing documents including text
SG194442A1 (en) In-video product annotation with web information mining
CN103136228A (en) Image search method and image search device
US20120191692A1 (en) Semantic matching by content analysis
MX2013005056A (en) Multi-modal approach to search query input.
CN105468790B (en) A kind of comment information search method and device
US20200272674A1 (en) Method and apparatus for recommending entity, electronic device and computer readable medium
CN110210038B (en) Core entity determining method, system, server and computer readable medium thereof
CN104317867B (en) The system that entity cluster is carried out to the Web page picture that search engine returns
US20190082236A1 (en) Determining Representative Content to be Used in Representing a Video
CN106294358A (en) The search method of a kind of information and system
US20120096033A1 (en) Disambiguation of Entities
CN103412880A (en) Method and device for determining implicit associated information between multimedia resources
CN116034401A (en) System and method for retrieving video using natural language descriptions
JP4879775B2 (en) Dictionary creation method
CN110674087A (en) File query method and device and computer readable storage medium
CN111625619B (en) Query omission method, device, computer readable medium and electronic equipment
CN105279172A (en) Video matching method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 516025 No. 1, Shunchang Road, Huinan Industrial Park, Zhongkai high tech Zone, Huizhou City, Guangdong Province

Patentee after: WELLAV TECHNOLOGIES Ltd.

Address before: 516006 Huitai Industrial Zone 63, Zhongkai High-tech Zone, Huizhou City, Guangdong Province

Patentee before: HUIZHOU WELLAV TECHNOLOGIES Co.,Ltd.
