CN112199522B - Interactive implementation method, terminal, server, computer device and storage medium
- Publication number: CN112199522B (application CN202010878454.4A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G06F16/43: Information retrieval of multimedia data; querying
- G06F16/41: Information retrieval of multimedia data; indexing, data structures and storage structures therefor
- G06F16/48: Information retrieval of multimedia data; retrieval characterised by using metadata
- Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The application discloses an interaction implementation method applied to an applet, comprising the following steps: acquiring an original picture of a page; identifying a reference object and obtaining the position information of the reference object in the original page picture; extracting feature points from the original page picture and obtaining the feature point information to be compared of the page; and loading and playing the corresponding multimedia file based on the feature point information to be compared and the position information of the reference object in the original page picture. The application also provides a terminal, a server, a computer device and a storage medium. Real-time content retrieval is performed through a light-application applet of social software, so no additional APP needs to be downloaded and the method is convenient to use.
Description
Technical Field
The present disclosure relates to the field of multimedia education technologies, and in particular, to an interaction implementation method, a terminal, a server, a computer device, and a storage medium.
Background
Point-and-read is an intelligent reading and learning mode implemented with optical image recognition technology and digital voice technology. It embodies the integration of electronic multimedia technology with the education industry and reflects a people-oriented approach to technology.
With existing point-and-read devices, the book usually has to be pre-processed by printing or pasting specific codes on it; otherwise the book contents cannot be identified. In addition, because of the limitations of the coding rules, the total number of available codes is limited, so the code-reading approach shows obvious limitations for books with richer content.
Disclosure of Invention
The purpose of the application is to provide an interaction implementation method, a terminal, a server, a computer device and a storage medium that perform real-time content retrieval on printed matter such as textbooks and teaching aids through a light-application applet of social software, so that no additional APP needs to be downloaded and the method is convenient to use.
To achieve the above objective, a first aspect of the embodiments of the present application discloses an interaction implementation method, which is applied to a social software light application applet, including:
acquiring an original picture of a page;
identifying a reference object and obtaining position information of the reference object in an original picture of a page;
extracting characteristic points from the original picture of the page and obtaining information of the characteristic points to be compared of the page;
loading and playing the corresponding multimedia file based on the feature point information to be compared and the position information of the reference object in the original page picture, wherein the feature point information to be compared is used for searching and comparison against a page feature library to obtain a corresponding feature point data page, the page feature library comprises a plurality of feature point data pages, and each feature point data page comprises at least one data block; the corresponding multimedia file is obtained by using the position information of the reference object in the original page picture to obtain the data block pointed to by the reference object in the corresponding feature point data page and then searching the multimedia database, the multimedia database contains the multimedia file, and the multimedia file corresponds to the data block.
Optionally, before loading the corresponding multimedia file and playing, the step further includes:
and outputting the characteristic point information to be compared of the page, wherein the characteristic point information to be compared is used for searching and comparing with a page characteristic library of the server to obtain a corresponding characteristic point data page.
Optionally, before loading the corresponding multimedia file and playing, the step further includes:
obtaining an original picture of a cover;
extracting feature points from the original picture of the cover;
acquiring and outputting information of characteristic points to be compared of the cover;
loading the corresponding printed matter sub-library, wherein the feature points to be compared of the cover are used for search matching against a cover feature sub-library, the pages of the cover feature sub-library correspond one-to-one to printed matter sub-libraries, the page feature library comprises the printed matter sub-libraries, each printed matter sub-library comprises a plurality of feature point data pages, and the cover feature sub-library is obtained by extracting feature points from original cover images of printed matter.
Optionally, before loading the corresponding multimedia file and playing, the method further includes:
and storing a page feature library.
Optionally, the step of storing the page feature library specifically includes:
acquiring an original page image of a printed matter and acquiring a feature descriptor;
Extracting hash values of corresponding feature descriptors and performing inverted indexing;
storing unique identifiers in the original page images containing hash values with the same size in the same position of a hash table in a linked list mode;
and constructing and storing a complete page feature library.
Optionally, the step of obtaining the feature descriptor specifically includes:
extracting characteristic points by using a key point detection algorithm;
carrying out characteristic point direction identification;
and describing the feature points to obtain feature descriptors.
Optionally, loading the corresponding multimedia file and playing the corresponding multimedia file specifically includes:
comparing the page to-be-compared characteristic point information with a local page characteristic library and obtaining a corresponding characteristic point data page;
obtaining a data block pointed by a reference object from the position information of the reference object in the original page picture;
outputting data block information;
and loading the multimedia file corresponding to the data block and playing the multimedia file.
Optionally, the step of comparing the feature point information of the page to be compared with the local page feature library and obtaining the corresponding feature point data page specifically includes:
acquiring feature descriptors corresponding to feature points in the page feature point information to be compared and extracting hash values;
comparing the hash value with the hash value of the characteristic point data page of the local page characteristic library and voting;
according to the voting scores, selecting the top-N feature point data pages as candidate results, wherein N is an integer greater than 1;
if the voting score is larger than a preset threshold value and there is a step change in the score ratio among the candidate results, taking the feature point data page with the largest number of matching hash values as the corresponding feature point data page.
Optionally, the step of obtaining the data block pointed by the reference object from the position information of the reference object in the original page picture specifically includes:
establishing a corresponding relation between the page to-be-compared characteristic point information and the characteristic point data page;
obtaining position information of a reference object in a characteristic point data page from position information of the reference object in an original page picture;
and obtaining a corresponding data block according to the position information.
Optionally, the step of identifying the reference object and obtaining the position information of the reference object in the original picture of the page specifically includes:
loading a deep learning model;
inputting the original page picture into the model;
and acquiring a reference object existing in the original picture from the model and acquiring the position information of the reference object in the original picture of the page.
The second aspect of the embodiment of the application discloses an interaction realizing terminal, which comprises:
The page picture acquisition module is used for acquiring an original page picture;
the page identification acquisition module is used for identifying a reference object and acquiring the position information of the reference object in the original page picture;
the page extraction and acquisition module is used for extracting characteristic points from the original page picture and acquiring the information of the characteristic points to be compared of the page;
the loading and playing module is used for loading and playing the corresponding multimedia file based on the position information of the feature point information to be compared and the reference object in the original page picture, wherein the feature point information to be compared of the page is used for searching and comparing with the page feature library to obtain a corresponding feature point data page, the page feature library comprises a plurality of feature point data pages, and each feature point data page comprises at least one data block; the corresponding multimedia file is obtained by obtaining the data block of the reference object in the corresponding characteristic point data page through the position information of the reference object in the page original picture and searching and comparing the multimedia database, the multimedia file is contained in the multimedia database, and the multimedia file corresponds to the data block.
Optionally, the method further comprises:
the page to-be-compared output module is used for outputting page to-be-compared characteristic point information, and the to-be-compared characteristic point information is used for searching and comparing with a page characteristic library of the server to obtain a corresponding characteristic point data page.
Optionally, the method further comprises:
the cover picture acquisition module is used for acquiring a cover original picture;
the cover extraction module is used for extracting characteristic points from the original cover picture;
the cover information obtaining and outputting module is used for obtaining and outputting the information of the characteristic points to be compared of the cover;
the loading sub-library module is used for loading a corresponding printed matter sub-library, wherein the cover feature points to be compared are used for carrying out searching pairing with the cover feature sub-library, the cover feature sub-library corresponds to the printed matter sub-library one by one, the page feature library comprises a printed matter sub-library, the printed matter sub-library comprises a plurality of feature point data pages, and the cover feature sub-library is obtained by extracting feature points of an original cover page image of a printed matter.
Optionally, the method further comprises:
and the terminal page storage module is used for storing the page feature library.
Optionally, the terminal page storage module includes:
the printed matter acquisition and extraction module is used for acquiring an original page image of the printed matter and acquiring a feature descriptor;
the extraction index module is used for extracting the hash value of the corresponding feature descriptor and performing inverted index;
the linked list storing module is used for storing the unique identifier in the original page image containing the hash value with the same size in the same position of the hash table in a linked list mode;
And the construction storage module is used for constructing and storing the completed page feature library.
Optionally, the print acquiring and extracting module includes:
the key point extraction module is used for extracting characteristic points by using a key point detection algorithm;
the direction recognition module is used for recognizing the direction of the characteristic points;
and the descriptor acquisition module is used for describing the feature points to obtain feature descriptors.
Optionally, the loading playing module includes:
the terminal comparison obtaining module is used for comparing the page to-be-compared characteristic point information with a local page characteristic library and obtaining a corresponding characteristic point data page;
a terminal data block obtaining module, which is used for obtaining a data block pointed by a reference object from the position information of the reference object in the original page picture;
a data block output module for outputting data block information;
and the loading and playing sub-module is used for loading the multimedia file corresponding to the data block and playing the multimedia file.
Optionally, the comparison obtaining module includes:
the terminal hash value extraction module is used for acquiring feature descriptors corresponding to feature points in the page to-be-compared feature point information and extracting hash values;
the terminal comparison voting module is used for comparing the extracted hash values with the hash values of the feature point data pages of the local page feature library and voting;
the terminal selection module is used for selecting the top-N feature point data pages by voting score as candidate results, wherein N is an integer greater than 1;
and the terminal judging module is used for taking the feature point data page with the largest number of matching hash values as the corresponding feature point data page if the voting score is larger than a preset threshold value and there is a step change in the score ratio among the candidate results.
Optionally, the terminal data block obtaining module includes:
the terminal corresponding relation establishing module is used for establishing the corresponding relation between the page to-be-compared characteristic point information and the characteristic point data page;
a terminal position acquisition module for acquiring the position information of the reference object in the characteristic point data page from the position information of the reference object in the page original picture;
and the terminal data block obtaining sub-module is used for obtaining the corresponding data block according to the position information.
Optionally, the page identification obtaining module includes:
the model loading module is used for loading the deep learning model;
the image input model module is used for inputting the original image of the page into the model;
and the reference object position acquisition module is used for acquiring the reference object present in the original picture from the model and acquiring the position information of the reference object in the original page picture.
A third aspect of the present application discloses a computer device, where the computer device includes a processor and a memory, where at least one instruction, at least one section of program, a code set, or an instruction set is stored in the memory, where the at least one instruction, the at least one section of program, the code set, or the instruction set is loaded and executed by the processor to implement an interaction implementation method as described above.
In a fourth aspect, the present application discloses a computer-readable storage medium, where at least one instruction, at least one program, a code set, or an instruction set is stored in the computer-readable storage medium, where the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement an interaction implementation method as described above.
The fifth aspect of the embodiment of the application discloses an interaction implementation method, which is applied to a server, and is characterized by comprising the following steps:
storing a multimedia database, wherein the multimedia database comprises a plurality of multimedia files;
receiving information of a data block, wherein the data block is obtained by searching and comparing the feature point information to be compared of a page with a page feature library to obtain a corresponding feature point data page and then locating, within that feature point data page, the data block pointed to by the reference object according to the position information of the reference object in the original page picture, and the page feature point information is obtained by extracting feature points from the original page picture; the page feature library comprises a plurality of feature point data pages, each feature point data page comprises at least one data block, and the multimedia files and the data blocks are in one-to-one correspondence;
Obtaining a corresponding multimedia file;
and outputting the multimedia file.
Optionally, before the step of receiving the information of the data block, the method further includes:
and storing a page feature library.
Optionally, before the step of receiving the information of the data block, the method further includes:
receiving cover to-be-compared characteristic point information, wherein the cover to-be-compared characteristic point information is obtained by extracting characteristic points from original cover pictures;
comparing the cover to-be-compared characteristic points with a cover characteristic sub-library, wherein the cover characteristic sub-library corresponds to the printed matter sub-library one by one, the page characteristic library comprises a printed matter sub-library, and the printed matter sub-library comprises a plurality of characteristic point data pages;
obtaining a corresponding printed matter sub-library;
and outputting the printed matter sub-library.
Optionally, the step of receiving the information of the data block specifically includes:
receiving page feature point information to be compared and position information of a reference object in an original page picture, wherein the page feature point information is obtained by extracting feature points of the original page picture;
comparing the page to-be-compared characteristic point information with a page characteristic library and obtaining a corresponding characteristic point data page;
obtaining a data block pointed by a reference object from the position information of the reference object in the original page picture;
data block information is received.
Optionally, the step of comparing the feature point information of the page to be compared with the page feature library and obtaining the corresponding feature point data page specifically includes:
extracting hash values from feature descriptors corresponding to the feature points extracted from the original page picture;
comparing the hash value with the hash value of the characteristic point data page of the page characteristic library and voting;
according to the voting scores, selecting the top-N feature point data pages as candidate results, wherein N is an integer greater than 1;
if the voting score is larger than a preset threshold value and there is a step change in the score ratio among the candidate results, taking the feature point data page with the largest number of matching hash values as the corresponding feature point data page.
Optionally, the step of obtaining the data block pointed by the reference object from the position information of the reference object in the original page picture specifically includes:
establishing a corresponding relation between the characteristic points to be compared of the page and the characteristic point data page;
obtaining the position of a reference object in a characteristic point data page from the position information of the reference object in the original page picture;
and obtaining a corresponding data block.
The sixth aspect of the embodiment of the application discloses an interactive implementation server, which comprises:
a multimedia storage module for storing a multimedia database, wherein the multimedia database comprises a plurality of multimedia files;
The data block receiving module is used for receiving information of a data block, wherein the data block is obtained by searching and comparing the characteristic point information to be compared of a page with a page characteristic library to obtain a corresponding characteristic point data page and searching and comparing the corresponding characteristic point data page with the position information of a reference object in an original page picture, and the page characteristic point information is obtained by extracting characteristic points of the original page picture; the page feature library comprises a plurality of feature point data pages, each feature point data page comprises at least one data block, and the multimedia file and the data blocks are in corresponding relation;
the multimedia obtaining module is used for obtaining the corresponding multimedia file;
and the multimedia output module is used for outputting the multimedia file.
Optionally, the method further comprises:
and the server-side page storage module is used for storing the page feature library.
Optionally, the method further comprises:
the cover information receiving module is used for receiving cover to-be-compared characteristic point information, wherein the cover to-be-compared characteristic point information is obtained by extracting characteristic points from original cover pictures;
the cover comparison module is used for comparing the feature points to be compared of the cover with the cover feature sub-library, wherein the cover feature sub-library corresponds one-to-one to the printed matter sub-libraries, the page feature library comprises the printed matter sub-libraries, and each printed matter sub-library comprises a plurality of feature point data pages;
The sub-library obtaining module is used for obtaining a corresponding printed matter sub-library;
and the sub-library output module is used for outputting the printed matter sub-library.
Optionally, the data block receiving module includes:
the device comprises a characteristic point and position receiving module, a characteristic point and position determining module and a position determining module, wherein the characteristic point and position receiving module is used for receiving the information of the characteristic point to be compared of the page and the position information of a reference object in an original picture of the page, and the information of the characteristic point of the page is obtained by extracting the characteristic point of the original picture of the page;
the server side comparison obtaining module is used for comparing the page to-be-compared characteristic point information with the page characteristic library and obtaining a corresponding characteristic point data page;
the server-side data block obtaining module is used for obtaining a data block pointed by the reference object from the position information of the reference object in the page original picture;
and the data block receiving module is used for receiving the data block information.
Optionally, the server side comparison obtaining module includes:
the server hash value extraction module is used for extracting hash values from feature descriptors corresponding to the feature points extracted from the original page pictures;
the server-side comparison voting module is used for comparing the extracted hash values with the hash values of the feature point data pages of the page feature library and voting;
the server-side selection module is used for selecting the top-N feature point data pages by voting score as candidate results, wherein N is an integer greater than 1;
and the server-side judging module is used for taking the feature point data page with the largest number of matching hash values as the corresponding feature point data page if the voting score is larger than a preset threshold value and there is a step change in the score ratio among the candidate results.
Optionally, the server side data block obtaining module includes:
the server-side corresponding relation establishing module is used for establishing the corresponding relation between the characteristic points to be compared of the page and the characteristic point data page;
the server side position acquisition module is used for acquiring the position of the reference object in the characteristic point data page from the position information of the reference object in the page original picture;
and the server-side data block obtaining sub-module is used for obtaining the corresponding data block.
A seventh aspect of the embodiments of the present application discloses a computer device, where the computer device includes a processor and a memory, where at least one instruction, at least one section of program, a code set, or an instruction set is stored in the memory, where the at least one instruction, the at least one section of program, the code set, or the instruction set is loaded and executed by the processor to implement an interactive implementation method as described above.
An eighth aspect of the present application discloses a computer readable storage medium, where at least one instruction, at least one program, a code set, or an instruction set is stored in the computer readable storage medium, where the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement an interactive implementation method as described above.
Compared with the background art, the application has the following advantages after adopting the above technical scheme: the applet loads and plays the corresponding multimedia file based on the feature point information to be compared and the position information of the reference object in the original page picture; no additional APP needs to be downloaded; the reference object is used for position indication, and the corresponding area is automatically identified and matched to obtain the multimedia file; the method is convenient to use and requires no special hardware equipment.
Drawings
FIG. 1 is a schematic view of an application environment of an interactive implementation method of the present application;
fig. 2 is an overall schematic diagram of the terminal of the present application;
FIG. 3 is a flowchart of an interactive implementation method according to a first embodiment of the present application;
fig. 4 is a schematic diagram of feature point extraction from English text according to the first embodiment of the application;
FIG. 5 is a schematic view of a combination of neighboring feature points selected in the first embodiment of the present application;
fig. 6 is a schematic diagram of a structure of a neural network for reference object recognition according to a first embodiment of the present application;
FIG. 7 is a schematic diagram of a neural network generation feature point descriptor according to a first embodiment of the present application;
FIG. 8 is a schematic diagram of feature point matching according to a first embodiment of the present application;
FIG. 9 is a flowchart showing the step S140 in FIG. 3;
FIG. 10 is a flowchart of the steps further included before step S140 in FIG. 3;
FIG. 11 is a flowchart of an interactive implementation method according to a second embodiment of the present application;
fig. 12 is a flowchart of the steps further included in fig. 11 before step S220;
fig. 13 is a block diagram of an interactive implementation terminal according to a third embodiment of the present application;
FIG. 14 is a block diagram of a computer device according to an embodiment of the present application;
fig. 15 is a block diagram of an interactive implementation terminal according to a fourth embodiment of the present application;
fig. 16 is a block diagram of a computer device according to yet another embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The embodiment of the application provides an interaction implementation method, which can be applied to the application environment shown in fig. 1. Referring to fig. 1, an application APP is installed on a terminal 101, where the application APP is, for example, WeChat, Alipay, Facebook or Line. The application APP hosts at least one applet, such as a WeChat applet, an Alipay applet, a Facebook applet or a Line applet. The applet uses the deep learning inference engine of the application APP, and the application APP provides open interfaces for applet development, application distribution and propagation, a sandbox runtime environment, and data and capabilities, so that the applet of the application APP can serve as a carrier and portal for content retrieval. The applet server 102 serves the applet; that is, although the applet runs inside the application APP, many functions of the applet are implemented by the applet server 102. The APP server 103 is the back-end server of the application APP, that is, the functions of the application APP are realized through the APP server 103, and the applet can call the APP server 103 or load functional modules of the APP server 103 through the open interface of the application APP. Here, each server may be a physical server, a cloud, or the like. The terminal 101 includes, for example, a mobile phone, a tablet computer, and the like.
Referring to fig. 2, the terminal 101 includes a terminal body and a mirror 104, and the application APP with the applet is installed on the terminal body. The terminal body includes a camera, a processor and a memory; the camera includes a front camera and a rear camera arranged on the front and rear sides of the terminal body respectively, and the front camera, the rear camera and the memory are each connected to the processor. The mirror 104 is mounted on the terminal body so that the front camera can shoot through it, and the terminal body is mounted on a base 105 at a certain inclination angle; when the front camera is started, it captures the page image reflected by the mirror 104 to obtain the original page picture. Through the applet, the user points at a region of interest on the page with a reference object in real time, the preset multimedia file is loaded and played, and interaction with the reader is achieved. In addition, in other embodiments of the present application, the terminal 101 may not include the mirror 104, and the original page picture may be obtained directly by a camera of the terminal body itself.
Referring to fig. 3, fig. 3 is a flowchart of an interaction implementation method according to a first embodiment of the present application. The interaction implementation method is applied to an applet of an application program APP, and comprises the following steps:
S110, acquiring an original picture of a page;
in this embodiment, the original image of the page may be obtained by the applet calling the camera on the terminal to shoot, or may be obtained from the album of the terminal. In this embodiment, the acquisition mode of the page original picture is not limited.
S120, identifying a reference object and obtaining position information of the reference object in an original picture of the page;
In this embodiment, the applet is provided with a JS plug-in interpreter based on HTML5 and invokes the processor, WebGL or WASM of the terminal through an inference engine to perform data operations on the obtained original page picture, so that a deep learning model can be loaded; the JS inference engine provided by TensorFlow (TensorFlow.js) is used. The inference engine can perform reference object recognition, such as finger recognition; fig. 6 shows the network structure used for finger recognition. Feature point extraction and feature point descriptor generation can also be performed through a neural network, as shown in fig. 7: the network first produces a semi-dense descriptor map, which reduces both the running time of the algorithm and the memory overhead of training; the remaining descriptors are then obtained by bicubic polynomial interpolation, and descriptors of uniform length are finally obtained through L2 normalization. Meanwhile, the inference process also calculates the position information, such as coordinates, of the reference object in the original page picture.
In this embodiment, the reference object is a human hand or finger, a pen-shaped object, an object with a light emitting device at its tip, or the like.
In this embodiment, the step S120 specifically includes the following steps:
loading a deep learning model by the applet;
inputting the original page picture into the model;
and acquiring a reference object existing in the original picture from the model and acquiring the position information of the reference object in the original picture of the page.
In this embodiment, the applet loads the deep learning model through the local storage interface with the application APP, and trains the deep learning model to make the deep learning model recognize the reference object and obtain the position information of the reference object.
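As an illustration only, the sketch below shows one possible way to load and run a detection model with TensorFlow.js to obtain the reference object and its position; the model URL, the output tensor layout and the choice of the box centre as the pointed position are assumptions for the sake of the example, not details given in the patent.

```typescript
import * as tf from '@tensorflow/tfjs';

// Assumed model location and output format (boxes as [ymin, xmin, ymax, xmax] in
// normalized coordinates, plus per-box scores); a real model may differ. In practice
// the model would be loaded once and cached rather than on every call.
const MODEL_URL = 'https://example.com/reference-detector/model.json';

export interface ReferencePosition { x: number; y: number; score: number }

export async function detectReference(
  image: HTMLImageElement | HTMLCanvasElement,
  threshold = 0.5,
): Promise<ReferencePosition | null> {
  const model = await tf.loadGraphModel(MODEL_URL);
  const input = tf.tidy(() =>
    tf.browser.fromPixels(image).expandDims(0).toFloat().div(255),
  );

  // executeAsync returns the model's output tensors; the order/shape here is an assumption.
  const [boxes, scores] = (await model.executeAsync(input)) as tf.Tensor[];
  const boxData = await boxes.data();
  const scoreData = await scores.data();
  tf.dispose([input, boxes, scores]);

  // Pick the highest-scoring detection and report the box centre as the pointed
  // position, in normalized page-picture coordinates.
  let best = -1;
  for (let i = 0; i < scoreData.length; i++) {
    if (scoreData[i] >= threshold && (best < 0 || scoreData[i] > scoreData[best])) best = i;
  }
  if (best < 0) return null;

  const [ymin, xmin, ymax, xmax] = boxData.slice(best * 4, best * 4 + 4);
  return { x: (xmin + xmax) / 2, y: (ymin + ymax) / 2, score: scoreData[best] };
}
```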
In this embodiment, the applet invokes the camera on the terminal to acquire original page pictures at a certain time interval or in real time, so there may be multiple original page pictures. When the position information of the reference object in the original page picture changes, the applet takes the position information of the reference object in the latest original page picture as the final position information, overwriting the previous position information, and plays the multimedia file corresponding to the final position information. If the applet is still playing the multimedia file corresponding to the previous position information at this time, it stops that playback and instead plays the multimedia file corresponding to the final position information, as sketched below.
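The following minimal sketch illustrates this latest-position-wins behaviour; the player interface and the helpers detectReference() (for example as sketched above) and resolveDataBlock() are hypothetical placeholders, not APIs defined by the patent.

```typescript
// Hypothetical player and lookup interfaces; they are not specified in the patent.
interface ReferencePosition { x: number; y: number; score: number }
interface DataBlock { id: string; mediaUrl: string }
interface Player { stop(): void; play(url: string): void }

// Assumed helpers: detectReference() runs the detection model on a captured frame,
// resolveDataBlock() maps a page position to the data block that contains it.
declare function detectReference(frame: HTMLCanvasElement): Promise<ReferencePosition | null>;
declare function resolveDataBlock(pos: ReferencePosition): DataBlock | null;

class InteractionLoop {
  private lastBlockId: string | null = null;

  constructor(private player: Player) {}

  // Called for every captured frame (fixed interval or real time); the newest
  // reference position always overrides the previous one.
  async onFrame(frame: HTMLCanvasElement): Promise<void> {
    const pos = await detectReference(frame);
    if (!pos) return;                          // no reference object in this frame

    const block = resolveDataBlock(pos);
    if (!block || block.id === this.lastBlockId) return;

    this.player.stop();                        // stop media tied to the previous position
    this.player.play(block.mediaUrl);          // play media for the new position
    this.lastBlockId = block.id;
  }
}
```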
S130, extracting characteristic points from an original picture of the page and obtaining information of the characteristic points to be compared of the page;
in this embodiment, the applet extracts feature points from the original picture of the page, and then processes the feature points to obtain information of the feature points to be compared of the page.
Specifically, step S130 includes:
acquiring an original picture of a page and a feature descriptor;
extracting hash values of corresponding feature descriptors and performing inverted indexing;
storing unique identifiers in the original page images containing hash values with the same size in the same position of a hash table in a linked list mode;
and constructing the to-be-compared characteristic point information of the completed page.
In this embodiment, the applet acquires the original page picture, obtains the feature descriptors of the original page picture and extracts the hash values of the feature descriptors. Without distinguishing which particular image a given hash value came from, it performs inverted indexing: the unique identifiers of the original page pictures that contain hash values of the same size are stored at the same position of a hash table in the form of a linked list, thereby obtaining the feature point information to be compared of the page.
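For illustration, the sketch below shows one way such an inverted index could be organised: a hash table in which each slot holds the "linked list" (here simply an array) of page identifiers whose descriptors produced that hash value. The structure and names are assumptions for explanation, not the patent's own data layout.

```typescript
// Minimal inverted index: hash value -> identifiers of the pages containing it.
// Querying then amounts to voting over the pages returned for each query hash.
class InvertedIndex {
  // One slot per possible hash value; each slot is the "linked list" of page ids.
  private table: string[][];

  constructor(private hashTableSize: number) {
    this.table = Array.from({ length: hashTableSize }, () => []);
  }

  // Index every hash value extracted from the feature descriptors of one page.
  addPage(pageId: string, hashValues: number[]): void {
    for (const h of hashValues) {
      const slot = this.table[h % this.hashTableSize];
      if (!slot.includes(pageId)) slot.push(pageId);
    }
  }

  // Accumulate one vote per query hash for every page stored under that hash.
  vote(queryHashes: number[]): Map<string, number> {
    const scores = new Map<string, number>();
    for (const h of queryHashes) {
      for (const pageId of this.table[h % this.hashTableSize]) {
        scores.set(pageId, (scores.get(pageId) ?? 0) + 1);
      }
    }
    return scores;
  }
}
```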
In this embodiment, the extracting the hash value of the corresponding feature descriptor specifically includes:
The feature values of the feature descriptors are computed as cross ratios, which are perspective invariant; for 5 coplanar points A, B, C, D and E the cross ratio is computed from triangle areas, where P denotes the area of the triangle formed by the indicated points.
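The cross-ratio formula itself is not reproduced in this text. A standard five-point form consistent with the description above, given here only as an assumption, is:

cr(A, B, C, D, E) = (P(A, B, C) · P(A, D, E)) / (P(A, B, D) · P(A, C, E))

where P(X, Y, Z) denotes the area of the triangle with vertices X, Y and Z.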
After the feature values are obtained, in order to improve robustness to local geometric distortion or missed feature points, 8 feature points are selected and fixed in the neighborhood of each feature point. From these, all combinations of 7 feature points are taken (8 such combinations); for each 7-point combination, all 5-point sub-combinations are taken (21 combinations) and the cross ratio of each is calculated. The 21 cross-ratio values are quantized, and the hash value is then calculated in the following manner:
wherein d(ri) is the quantized value of the i-th cross ratio, k is the number of quantization levels, and Hsize is the size of the hash table. In this embodiment, a comparison between fixed and variable quantization step sizes and different values of k led to choosing a variable step size: the quotient of the maximum of the 21 combined values and k is taken as the quantization step, and, as illustrated in fig. 5, k = 4 is the quantization level with the highest reliability. The 21 feature values can thus be combined into one hash value, and the 8 hash values in the neighborhood of the central point ensure resistance to interference.
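The hash formula referenced above is likewise not reproduced in this text. A common index function of this kind, consistent with the variables d(ri), k and Hsize defined above and given here only as an assumption, is:

Hindex = ( sum over i = 0..20 of d(ri) · k^i ) mod Hsize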
In this embodiment, the step of obtaining the original image of the page and obtaining the feature descriptor specifically includes the following steps:
a. extracting feature points with a key point detection algorithm: the original page picture is repeatedly downsampled to obtain a series of images of different sizes, Gaussian filtering is applied at the different scales, and two Gaussian-filtered versions of the same image at adjacent scales are subtracted to obtain a difference-of-Gaussians image; extremum detection is then performed, and the extremum points satisfying the curvature condition are the feature points. The difference-of-Gaussians image D(x, y, σ) is computed as follows, where G(x, y, σ) is the Gaussian filter function, I(x, y) is the original image, and L(x, y, σ) is the Gaussian-filtered image at scale σ:
D(x, y, σ) = (G(x, y, σ(s+1)) - G(x, y, σ(s))) * I(x, y)
           = L(x, y, σ(s+1)) - L(x, y, σ(s))
b. performing feature point orientation assignment based on histogram statistics: after the gradients around a feature point have been computed, a histogram is used to accumulate the gradient magnitudes and directions of the pixels in its neighborhood; the gradient histogram divides the 0-360 degree direction range into 18 bins of 20 degrees each, and the peak direction of the histogram is taken as the main direction of the feature point. With L the scale-space value at the key point, the gradient magnitude m and direction θ of each pixel are calculated as:
m(x, y) = sqrt((L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2)
θ(x, y) = arctan((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)))
c. describing the feature points to obtain the feature descriptors.
For printed matter consisting mainly of Chinese and English text, as shown in fig. 4, text whose basic unit is the word (such as English) has words of different lengths, so the positions of words differ considerably between photographed images of the printed text, as shown in the input image (a) of fig. 4. Chinese characters, by contrast, are square-shaped characters without the varying word lengths of English: the characters are roughly equally spaced and the centre points of different characters lie in almost the same positions, so it is difficult to distinguish local centre-point positions as can be done for English. A fixed window can therefore be used to project a text line and find the gap regions where the spacing between characters is larger; the centres of these gap regions, together with the centroids of the continuous character segments that the gaps divide the line into, are taken as feature points. If a text line has no sufficiently large gap region, the line is divided into 3 equal segments and the division points are taken as feature points, as shown in the feature points (b) of fig. 4, so as to obtain the feature descriptors.
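As an illustrative sketch only (the projection-profile representation, the gap-width threshold and the handling of the fallback case are assumptions, not parameters given in the patent), gap-based feature point selection along one text line could look like this:

```typescript
interface Point { x: number; y: number }

// Select feature points along one text line from its horizontal projection profile
// (ink count per column). Gaps wider than `minGapWidth` columns contribute one
// feature point at the gap centre, and the character runs separated by such gaps
// contribute their (approximate) centroids. If no sufficiently wide gap exists,
// the line is divided into three equal segments and the division points are used.
function textLineFeaturePoints(
  profile: number[],        // ink pixels per column within the line's bounding box
  lineCenterY: number,      // vertical centre of the text line
  minGapWidth: number,
): Point[] {
  const points: Point[] = [];
  let runStart = -1;        // start column of the current character run
  let gapStart = -1;        // start column of the current empty (gap) run

  const closeRun = (end: number) => {
    if (runStart >= 0) {
      points.push({ x: (runStart + end) / 2, y: lineCenterY });  // run centroid (approx.)
      runStart = -1;
    }
  };

  for (let x = 0; x < profile.length; x++) {
    if (profile[x] === 0) {
      if (gapStart < 0) gapStart = x;
    } else {
      if (gapStart >= 0 && x - gapStart >= minGapWidth) {
        closeRun(gapStart);                                      // run ended before the gap
        points.push({ x: (gapStart + x) / 2, y: lineCenterY });  // centre of the wide gap
      }
      gapStart = -1;
      if (runStart < 0) runStart = x;
    }
  }
  closeRun(gapStart >= 0 ? gapStart : profile.length);           // close the last run

  if (points.length <= 1) {
    // No wide gaps (typical for Chinese text): use the two points that divide
    // the line into three equal segments.
    const w = profile.length;
    return [{ x: w / 3, y: lineCenterY }, { x: (2 * w) / 3, y: lineCenterY }];
  }
  return points;
}
```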
S140, loading and playing the corresponding multimedia file based on the feature point information to be compared and the position information of the reference object in the original page picture, wherein the feature point information to be compared of the page is used for searching and comparison against a page feature library to obtain a corresponding feature point data page, the page feature library comprises a plurality of feature point data pages, and each feature point data page comprises at least one data block; the corresponding multimedia file is obtained by using the position information of the reference object in the original page picture to obtain the data block pointed to by the reference object in the corresponding feature point data page and then searching the multimedia database, the multimedia database contains the multimedia file, and the multimedia file corresponds to the data block.
In this embodiment, the server stores a page feature library, which is obtained by extracting and processing feature points of an original page image of a printed matter, and the construction of the specific page feature library is similar to the construction method of the feature point information to be compared of the page, which is not described herein. In this embodiment, the page feature library includes a plurality of feature point data pages, each feature point data page including at least one data block; the corresponding multimedia file is obtained by obtaining the data block of the reference object in the corresponding characteristic point data page through the position information of the reference object in the page original picture and searching and comparing the multimedia database, the multimedia file is contained in the multimedia database, the multimedia file corresponds to the data block, and the multimedia database is stored at the server. In this embodiment, the multimedia files are in one-to-one correspondence with the data blocks, but the application is not limited thereto, and in other embodiments of the application, one multimedia file may correspond to a plurality of data blocks, and a corresponding multimedia file may be found from any one of the plurality of data blocks.
In this embodiment, when the server stores the page feature library, the applet loads the corresponding print sub-library, and then, the applet obtains the data block of the reference object in the feature point data page of the corresponding print sub-library via the position information of the reference object in the original picture of the page. The page feature library comprises a printed matter sub-library, and the printed matter sub-library comprises a plurality of feature point data pages, for example, the printed matter sub-library is a set of feature point data pages of all pages of a book. In addition, in other embodiments of the present application, the applet transmits the feature point information to be compared and the position information of the reference object in the original page picture to the server for processing, the server obtains the data block of the reference object in the corresponding feature point data page through the position information of the reference object in the original page picture, the server stores the multimedia database, then obtains the corresponding multimedia file through the data block, and then the server transmits the corresponding multimedia file to the applet, and the applet loads and plays the corresponding multimedia file.
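Purely as an illustration of the server-assisted variant described above (all endpoint paths, payload shapes and helper names are assumptions, not part of the patent), the round trip from captured page picture to played multimedia file could be sketched as follows:

```typescript
// Hypothetical payloads exchanged between applet and server.
interface QueryPayload {
  pageHashes: number[];                         // hash values of the page's feature descriptors
  referencePosition: { x: number; y: number };  // reference object position in the page picture
}
interface MediaResponse { dataBlockId: string; mediaUrl: string }

// Assumed applet-side helpers (sketched elsewhere in this description).
declare function extractPageHashes(frame: HTMLCanvasElement): Promise<number[]>;
declare function detectReference(frame: HTMLCanvasElement): Promise<{ x: number; y: number } | null>;
declare function playMedia(url: string): void;

// One interaction round: extract query features and reference position, let the
// server resolve the feature point data page, the pointed data block and the
// corresponding multimedia file, then play what it returns.
export async function handleFrame(frame: HTMLCanvasElement): Promise<void> {
  const referencePosition = await detectReference(frame);
  if (!referencePosition) return;

  const payload: QueryPayload = {
    pageHashes: await extractPageHashes(frame),
    referencePosition,
  };

  // Endpoint name is an assumption; the patent only states that the applet
  // outputs this information to the server and receives a multimedia file back.
  const res = await fetch('https://example.com/api/retrieve-media', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
  if (!res.ok) return;

  const media = (await res.json()) as MediaResponse;
  playMedia(media.mediaUrl);
}
```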
In the embodiment, the light application applet of the social software application program APP can play the corresponding multimedia file based on the position information of the reference object in the original picture of the page and the information of the feature points to be compared of the page, so that the application program APP for implementing the method does not need to be downloaded separately, and the use of a user is facilitated; in addition, no additional hardware equipment is needed, and the cost is low.
Specifically, referring to fig. 10, in this embodiment, before loading and playing the corresponding multimedia file, the steps further include:
s151, obtaining an original picture of the cover;
in this embodiment, the original image of the cover is obtained by the applet calling the camera on the terminal, or may be obtained from the album of the terminal, or obtained by other means. The applet can prompt the user to turn the printed matter to the cover by sending prompt information, so that the applet can call the camera to shoot to obtain the original picture of the cover.
S152, extracting characteristic points from the original picture of the cover;
s153, obtaining and outputting cover to-be-compared characteristic point information;
in this embodiment, the steps of extracting the feature points and obtaining the feature point information to be compared of the cover are the same as the previous steps, and the applet outputs the feature point information to be compared of the cover to the server.
S154, loading the corresponding printed matter sub-library, wherein the feature points to be compared of the cover are used for search matching against the cover feature sub-library, the pages of the cover feature sub-library correspond one-to-one to printed matter sub-libraries, the page feature library comprises the printed matter sub-libraries, each printed matter sub-library comprises a plurality of feature point data pages, and the cover feature sub-library is obtained by extracting feature points from original cover images of printed matter.
In this embodiment, the server side also stores a cover feature sub-library, which is obtained by extracting and processing feature points from original cover images of printed matter. The cover feature sub-library contains feature point cover pages, and each feature point cover page corresponds to one printed matter sub-library. The page feature library comprises the printed matter sub-libraries; each printed matter sub-library is the set of all feature point data pages of one book, and each feature point data page is the set of feature point data of one page. The server compares the feature point information to be compared of the cover with the cover feature sub-library to obtain the corresponding feature point cover page and hence the corresponding printed matter sub-library. At this point the server knows which printed matter sub-library corresponds to the cover feature point information and can output that printed matter sub-library to the applet for loading; after loading, the applet stores the printed matter sub-library, and the comparison of the page feature point information to be compared against the printed matter sub-library can then be carried out inside the applet. In addition, in other embodiments of the present application, the applet may instead output the page feature point information to be compared to the server, where the search and comparison against the printed matter sub-library of the page feature library is performed, i.e. the processing may be done at the server.
In this embodiment, the cover feature sub-library may be included in the page feature library, where the page feature library includes a cover feature sub-library and a printed matter sub-library. In addition, in other embodiments of the present application, the cover feature sub-library may not be included in the page feature library, and at this time, the feature point cover pages in the cover feature sub-library are in one-to-one correspondence with the printed matter sub-libraries in the page feature library.
In addition, in other embodiments of the present application, steps S151-S154 may not be included, where the applet may prompt the user to input related information of the printed matter, such as a name of a book, a unique number, etc., and then search the input information into a pre-stored list, and find corresponding text information from the list, where, since the content in each table in the list corresponds to a printed matter sub-library of the page feature library, the corresponding printed matter sub-library may be found and loaded into the applet, where, the list may be stored in the applet or may be stored at the server.
In this embodiment, the page feature library is stored in the server, and after the server determines the printed matter sub-library corresponding to the feature point information to be compared with the page, the applet stores the corresponding printed matter sub-library. In addition, in other embodiments of the present application, the page feature library may also be stored directly in the applet, thus eliminating the need to load the print sub-library. In this case, the applet store page feature library specifically includes: acquiring an original page image of a printed matter and acquiring a feature descriptor; extracting hash values of corresponding feature descriptors and performing inverted indexing; storing unique identifiers in the original page images containing hash values with the same size in the same position of a hash table in a linked list mode; and constructing and storing a complete page feature library. The step of obtaining the original page image of the printed matter and obtaining the feature descriptor may refer to the step of obtaining the original page image and obtaining the feature descriptor, which is not described herein.
In this embodiment, after the applet loads the printed matter sub-library, please refer to fig. 9, and step S140 specifically includes:
s141, comparing the page to-be-compared characteristic point information with a local page characteristic library and obtaining a corresponding characteristic point data page;
Step S141 is specifically implemented as follows: acquiring the feature descriptors corresponding to the feature points in the page feature point information to be compared and extracting their hash values; comparing these hash values with the hash values of the feature point data pages stored in the page feature library and voting; and selecting the top-N feature point data pages by voting score as candidate results, wherein N is an integer greater than 1. Specifically, for each hash value the linked list storing the unique identifiers is read from the page feature library (i.e. the hash table), the original page images containing that hash value are counted, all hash values extracted from the page image to be searched are traversed to cast votes, and the top-N feature point data pages by voting score are selected as candidate results. If the voting score is larger than a preset threshold value and there is a step change in the score ratio among the candidate results, the feature point data page with the largest number of matching hash values is taken as the search result. A sketch of this voting retrieval follows.
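For illustration (the threshold, the step-change ratio and the reuse of the vote map from the earlier inverted-index sketch are assumptions, not values given in the patent), the candidate selection and acceptance test could be sketched as:

```typescript
interface Candidate { pageId: string; score: number }

// Rank pages by accumulated votes, keep the top-N, and accept the best page only
// when its score clears a threshold and drops sharply ("step change") to the runner-up.
function selectPage(
  scores: Map<string, number>,   // votes per page id, e.g. from InvertedIndex.vote()
  topN: number,
  minScore: number,
  stepRatio: number,             // e.g. 2.0: best must score at least twice the runner-up
): Candidate | null {
  const candidates: Candidate[] = [...scores.entries()]
    .map(([pageId, score]) => ({ pageId, score }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topN);

  if (candidates.length === 0) return null;
  const best = candidates[0];
  const runnerUp = candidates[1];

  const clearsThreshold = best.score > minScore;
  const hasStepChange = !runnerUp || best.score >= stepRatio * runnerUp.score;

  return clearsThreshold && hasStepChange ? best : null;
}
```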
In this embodiment, the local page feature library is a print sub-library loaded onto the applet. However, the application is not limited thereto, and when the applet itself stores the page feature library, the local page feature library is the page feature library stored in the applet.
S142, obtaining a data block pointed by the reference object from the position information of the reference object in the original page picture;
S143, outputting data block information;
in this embodiment, please refer to fig. 8 in combination: because the feature point information to be compared of the page corresponds to the feature point data page, the position information of the reference object in the feature point data page can be obtained from the position information of the reference object in the original picture of the page, the data block pointed to by the reference object is then obtained, and the applet outputs the data block information to the server.
In this embodiment, please continue to refer to fig. 8, step S142 specifically includes:
establishing a corresponding relation between the page to-be-compared characteristic point information and the characteristic point data page;
in this embodiment, the step S141 is used to obtain the feature point data page corresponding to the feature point information to be compared, so that a correspondence between the feature point information to be compared and the feature point data page can be established, where the correspondence may be a correspondence between coordinates.
Obtaining position information of a reference object in a characteristic point data page from position information of the reference object in an original page picture;
specifically, after the descriptors of the feature points to be compared of the page and the descriptors of the feature point data page are extracted and purified, the feature point matching relation diagram shown in fig. 8 is obtained. The inference process also calculates the coordinates of the reference object in the field of view of the camera. A homography matrix between the captured page picture and the sample (the feature point data page) is then calculated from matched points selected close to the reference object, and the position information of the reference object (the triangle in the schematic diagram) in the sample space is calculated through the homography matrix.
Let the image points $p_1 = (x_1, y_1)$ and $p_2 = (x_2, y_2)$ on the two images be a matched pair of points and let the homography matrix be $H$. In homogeneous coordinates,

$$\begin{pmatrix} x_2 \\ y_2 \\ 1 \end{pmatrix} \sim \begin{pmatrix} H_{11} & H_{12} & H_{13} \\ H_{21} & H_{22} & H_{23} \\ H_{31} & H_{32} & H_{33} \end{pmatrix} \begin{pmatrix} x_1 \\ y_1 \\ 1 \end{pmatrix}.$$

Expanding the matrix multiplication and eliminating the unknown scale factor (dividing the first and second rows by the third) gives

$$x_2 (H_{31} x_1 + H_{32} y_1 + H_{33}) = H_{11} x_1 + H_{12} y_1 + H_{13},$$
$$y_2 (H_{31} x_1 + H_{32} y_1 + H_{33}) = H_{21} x_1 + H_{22} y_1 + H_{23}.$$

Moving the right-hand side of each equation over so that it equals zero gives

$$x_2 (H_{31} x_1 + H_{32} y_1 + H_{33}) - (H_{11} x_1 + H_{12} y_1 + H_{13}) = 0,$$
$$y_2 (H_{31} x_1 + H_{32} y_1 + H_{33}) - (H_{21} x_1 + H_{22} y_1 + H_{23}) = 0.$$

These equations can be rewritten as vector products, which puts the system into the easily solved form $Ah = 0$. Let

$$h = (H_{11}, H_{12}, H_{13}, H_{21}, H_{22}, H_{23}, H_{31}, H_{32}, 1)^{T},$$

where the homography matrix $H$ is homogeneous, so its last element can be normalized to 1. The two equations above become

$$a_x^{T} h = 0, \qquad a_y^{T} h = 0,$$

with

$$a_x = (-x_1, -y_1, -1, 0, 0, 0, x_2 x_1, x_2 y_1, x_2)^{T}, \qquad a_y = (0, 0, 0, -x_1, -y_1, -1, y_2 x_1, y_2 y_1, y_2)^{T}$$

obtained from a single matched point pair. Since $H$ has 8 unknowns, at least 4 matched point pairs (of which no 3 points are collinear) are needed to solve for the homography matrix of the two images. In general, however, there are more than 4 matched point pairs; stacking $n$ pairs gives

$$A h = 0, \qquad A = (a_{x,1}, a_{y,1}, a_{x,2}, a_{y,2}, \ldots, a_{x,n}, a_{y,n})^{T},$$

where $A$ is a $2n \times 9$ matrix. By solving this system for the homography matrix and applying it to the known position of the reference object in the original picture of the page, the position of the reference object in the characteristic point data page is obtained.
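The system $Ah = 0$ is commonly solved with a singular value decomposition. The sketch below does this with NumPy and then maps the detected reference object position into the characteristic point data page; in practice cv2.findHomography with RANSAC would be the usual, more robust choice, and this direct linear transform version is shown only to mirror the derivation above.

```python
import numpy as np

def homography_dlt(points_captured, points_page):
    """Estimate H from n >= 4 matched pairs: (x1, y1) in the captured picture, (x2, y2) in the data page."""
    rows = []
    for (x1, y1), (x2, y2) in zip(points_captured, points_page):
        rows.append([-x1, -y1, -1, 0, 0, 0, x2 * x1, x2 * y1, x2])
        rows.append([0, 0, 0, -x1, -y1, -1, y2 * x1, y2 * y1, y2])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)        # h is the right singular vector of the smallest singular value
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                 # normalize so the last element is 1

def map_reference_point(H, point):
    """Map the reference object position from the captured picture into the data page."""
    x, y = point
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```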
And obtaining a corresponding data block according to the position information.
Since the position information of the reference object in the original picture of the page is already known, the position of the reference object in the characteristic point data page is obtained from it through the homography; and since that position falls within a data block of the characteristic point data page, the corresponding data block is obtained.
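A sketch of resolving the mapped position to a data block, assuming each data block is stored as an axis-aligned rectangle in data-page coordinates together with an identifier; the patent only requires that the position fall within a block, so this layout is an assumption.

```python
def find_data_block(page_blocks, x, y):
    """page_blocks: e.g. [{"id": "blk_3", "rect": (50, 120, 300, 260)}, ...] in data-page coordinates."""
    for block in page_blocks:
        x0, y0, x1, y1 = block["rect"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return block["id"]   # this id selects the multimedia file at the server
    return None
```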
S144, loading the multimedia file corresponding to the data block and playing.
In this embodiment, after the server obtains the data block information, the server sends the corresponding multimedia file to the applet because the data block corresponds to the multimedia file in the media library, and the applet loads and plays the corresponding multimedia file.
Fig. 11 is a flowchart of an interaction implementation method according to a second embodiment of the present application. The interaction implementation method is applied to the server side of the applet, and corresponds to the interaction implementation method of the first embodiment, and the undescribed part of the embodiment is referred to the first embodiment. Referring to fig. 11, the interaction implementation method includes the following steps:
s210, storing a multimedia database, wherein the multimedia database comprises a plurality of multimedia files;
in this embodiment, since the multimedia database is relatively large, it cannot be stored locally in the applet and is instead stored at the server side, for example on a server or in the cloud. In this embodiment, the multimedia database includes a plurality of multimedia files, where each multimedia file is an audio file, a video file, or a combination of the two.
S220, receiving information of a data block, wherein the data block is obtained by searching and comparing the characteristic point information to be compared of a page with a page characteristic library to obtain a corresponding characteristic point data page, and then locating the data block in the corresponding characteristic point data page through the position information of a reference object in the original page picture, and the page characteristic point information is obtained by extracting characteristic points from the original page picture; the page feature library comprises a plurality of feature point data pages, each feature point data page comprises at least one data block, and the multimedia files and the data blocks are in corresponding relation;
In this embodiment, the page feature library is stored in the server. The server receives the cover feature point information to be compared obtained by the applet and compares it with the cover feature sub-library to obtain the corresponding printed matter sub-library information, and then outputs the corresponding printed matter sub-library to the applet, which loads it. The applet then retrieves and compares the page feature point information to be compared with the printed matter sub-library to obtain the feature point data page corresponding to the page feature point information, obtains the position information of the reference object in the feature point data page through the position information of the reference object in the original picture of the page, and so obtains the corresponding data block. The applet then sends the information of the corresponding data block to the server, and the server receives it. In addition, in other embodiments of the present application, after the server obtains the corresponding printed matter sub-library information, the server obtains the page feature point information to be compared and the position information of the reference object in the original image of the page; the server retrieves and compares the page feature point information to be compared with the printed matter sub-library to obtain the corresponding feature point data page, then obtains the position information of the reference object in the feature point data page through the position information of the reference object in the original image of the page, and so obtains the corresponding data block. In addition, in other embodiments of the present application, the server may directly obtain related information of the printed matter input by the user, such as a title or a unique number, search the input information in a pre-stored list, and find the corresponding text information from the list.
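The applet-to-server exchange described above can be pictured as two small requests. The sketch below uses hypothetical HTTP endpoints and payload fields; a real applet would use its platform's own networking API, and the patent does not specify a transport or message format.

```python
import requests

SERVER = "https://example.com/api"   # hypothetical endpoint; not specified by the patent

def fetch_sub_library(cover_hashes):
    """Send cover hash values, receive the matching printed matter sub-library (inverted index)."""
    reply = requests.post(f"{SERVER}/cover-match", json={"hashes": cover_hashes}, timeout=5)
    reply.raise_for_status()
    return reply.json()              # assumed shape: {"sub_library_id": ..., "index": {...}}

def fetch_media(block_id):
    """Report the selected data block, receive the location of the corresponding multimedia file."""
    reply = requests.post(f"{SERVER}/data-block", json={"block_id": block_id}, timeout=5)
    reply.raise_for_status()
    return reply.json()              # assumed shape: {"type": "audio", "url": ...}
```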
S230, obtaining a corresponding multimedia file;
in this embodiment, after receiving the corresponding data block information, the server obtains the corresponding multimedia file through the data block, where the multimedia file corresponds to the data block.
S240, outputting the multimedia file.
In this embodiment, after the server obtains the corresponding multimedia file, the server outputs the multimedia file to the applet, and then the applet loads and plays the multimedia file.
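Steps S220-S240 can be pictured as a single server endpoint backed by a data block to multimedia file table; the route name, storage layout, and the Flask framework below are illustrative assumptions rather than part of the patent.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Assumed media library: each data block id maps to one multimedia file.
MEDIA_LIBRARY = {
    "blk_3": {"type": "audio", "url": "https://example.com/media/page12_word3.mp3"},
    "blk_7": {"type": "video", "url": "https://example.com/media/page12_diagram.mp4"},
}

@app.route("/data-block", methods=["POST"])
def data_block():
    block_id = request.get_json().get("block_id")   # S220: receive the data block information
    media = MEDIA_LIBRARY.get(block_id)             # S230: obtain the corresponding multimedia file
    if media is None:
        return jsonify({"error": "unknown data block"}), 404
    return jsonify(media)                           # S240: output it to the applet for loading and playing
```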
In this embodiment, referring to fig. 12, the method further includes the following steps before step S220:
S251, receiving cover to-be-compared characteristic point information, wherein the cover to-be-compared characteristic point information is obtained by extracting characteristic points from original pictures of the cover;
in this embodiment, feature point information to be compared of the cover is obtained by extracting feature points from an original image of the cover by an applet, and then outputting the feature point information to a server.
S252, comparing the cover to-be-compared characteristic points with a cover characteristic sub-library, wherein the cover characteristic sub-library corresponds to the printed matter sub-library one by one, the page characteristic library comprises a printed matter sub-library, and the printed matter sub-library comprises a plurality of characteristic point data pages;
S253, obtaining a corresponding printed matter sub-library;
in this embodiment, the server side further stores a cover feature sub-library, which is obtained by extracting and processing feature points of the original cover images of printed matters. The cover feature sub-library includes feature point cover pages, and each feature point cover page corresponds to one printed matter sub-library. The page feature library includes printed matter sub-libraries, and each printed matter sub-library includes a plurality of feature point data pages, where each printed matter sub-library corresponds to the set of all feature point data pages of one book and each feature point data page contains the set of feature point data of one page. The server side obtains the corresponding feature point cover page by comparing the cover feature point information to be compared with the cover feature sub-library, and thereby obtains the corresponding printed matter sub-library. At this point the server side already knows the printed matter sub-library corresponding to the cover feature point information to be compared and can output it to the applet for loading, so that the retrieval and comparison of the page feature point information to be compared with the printed matter sub-library can be processed in the applet. In addition, in other embodiments of the present application, the applet may instead output the page feature point information to be compared to the server, and the retrieval and comparison of the page feature point information to be compared with the printed matter sub-library of the page feature library may be processed in the server.
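A sketch of steps S252-S253: vote the cover hash values against each feature point cover page and return the printed matter sub-library tied to the best match. Representing each cover page as a set of hash values and the minimum vote count are assumptions.

```python
def select_sub_library(cover_hashes, cover_feature_sub_library, min_votes=15):
    """cover_feature_sub_library: {sub_library_id: set of hash values of that feature point cover page}."""
    best_id, best_votes = None, 0
    for sub_library_id, cover_page_hashes in cover_feature_sub_library.items():
        votes = sum(1 for h in cover_hashes if h in cover_page_hashes)
        if votes > best_votes:
            best_id, best_votes = sub_library_id, votes
    return best_id if best_votes >= min_votes else None
```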
In this embodiment, the cover feature sub-library may be included in the page feature library; in that case the page feature library includes the cover feature sub-library and the printed matter sub-libraries. In addition, in other embodiments of the present application, the cover feature sub-library may not be included in the page feature library; in that case the feature point cover pages in the cover feature sub-library correspond one-to-one to the printed matter sub-libraries in the page feature library.
S254: and outputting the printed matter sub-library.
In this embodiment, the server outputs the sub-library of printed matter to the applet, so that the applet loads the corresponding sub-library of printed matter.
In this embodiment, before step S220, the method further includes: and storing a page feature library. The construction and storage of the page feature library are described in the foregoing, and are not repeated here.
In addition, in other embodiments of the present application, the data block is obtained at the server side, not in the applet, and step S220 specifically includes:
receiving page feature point information to be compared and position information of a reference object in an original page picture, wherein the page feature point information is obtained by extracting feature points of the original page picture;
comparing the page to-be-compared characteristic point information with a page characteristic library and obtaining a corresponding characteristic point data page;
Obtaining a data block pointed by a reference object from the position information of the reference object in the original page picture;
data block information is received.
In other embodiments of the present application, the step of comparing the feature point information to be compared of the page with the page feature library and obtaining the corresponding feature point data page specifically includes:
extracting hash values from feature descriptors corresponding to the feature points extracted from the original page picture;
comparing the hash value with the hash value of the characteristic point data page of the page characteristic library and voting;
according to the scoring result of the voting, selecting the top N characteristic point data pages as candidate results, wherein N is an integer greater than 1;
if the scoring result of the voting is larger than a preset threshold value and the score proportion among the candidate results has a step change, the characteristic point data page with the largest number of identical hash values is used as the corresponding characteristic point data page.
In other embodiments of the present application, the step of obtaining the data block pointed by the reference object from the position information of the reference object in the page original picture specifically includes:
establishing a corresponding relation between the characteristic points to be compared of the page and the characteristic point data page;
obtaining the position of a reference object in a characteristic point data page from the position information of the reference object in the original page picture;
And obtaining a corresponding data block.
Fig. 13 is a block diagram of an interaction implementation terminal according to a third embodiment of the present application, where the interaction implementation terminal is configured to implement the interaction implementation method of the first embodiment, and a part of the description of this embodiment is referred to the first embodiment. Referring to fig. 13, the interaction implementation terminal includes:
a page picture obtaining module 301, configured to obtain an original page picture;
a page identification obtaining module 302, configured to identify a reference object and obtain position information of the reference object in an original page picture;
the page extraction obtaining module 303 is configured to extract feature points from the original page picture and obtain feature point information to be compared of the page;
the loading and playing module 304 is configured to load and play a corresponding multimedia file based on position information of the feature point information to be compared and the reference object in the original page picture, where the feature point information to be compared of the page is used to search and compare with a page feature library to obtain a corresponding feature point data page, the page feature library includes a plurality of feature point data pages, and each feature point data page includes at least one data block; the corresponding multimedia file is obtained by obtaining the data block of the reference object in the corresponding characteristic point data page through the position information of the reference object in the page original picture and searching and comparing the multimedia database, the multimedia file is contained in the multimedia database, and the multimedia file corresponds to the data block.
In this embodiment, the interactive implementation terminal further includes a page to-be-compared output module, configured to output information of to-be-compared feature points of the page, where the information of to-be-compared feature points is used to search and compare with a page feature library of the server to obtain a corresponding feature point data page.
In this embodiment, the interactive implementation terminal further includes a cover image acquisition module, configured to acquire an original cover image;
the cover extraction module is used for extracting characteristic points from the original cover picture;
the cover information obtaining and outputting module is used for obtaining and outputting the information of the characteristic points to be compared of the cover;
the loading sub-library module is used for loading a corresponding printed matter sub-library, wherein the cover feature points to be compared are used for search and matching with the cover feature sub-library, the cover feature sub-library corresponds to the printed matter sub-libraries one by one, the page feature library comprises the printed matter sub-library, the printed matter sub-library comprises a plurality of feature point data pages, and the cover feature sub-library is obtained by extracting feature points from an original cover page image of a printed matter.
In this embodiment, the loading and playing module 304 includes:
the terminal comparison obtaining module is used for comparing the page to-be-compared characteristic point information with a local page characteristic library and obtaining a corresponding characteristic point data page;
A terminal data block obtaining module, which is used for obtaining a data block pointed by a reference object from the position information of the reference object in the original page picture;
a data block output module for outputting data block information;
and the loading and playing sub-module is used for loading the multimedia file corresponding to the data block and playing the multimedia file.
In this embodiment, the terminal comparison obtaining module includes:
the terminal hash value extraction module is used for acquiring feature descriptors corresponding to feature points in the page to-be-compared feature point information and extracting hash values;
the terminal comparison voting module is used for comparing the extracted hash values with the hash values of the characteristic point data pages of the local page characteristic library and voting;
the terminal selection module is used for selecting, according to the scoring result of the voting, the top N characteristic point data pages as candidate results, wherein N is an integer greater than 1;
and the terminal judging module is used for taking the characteristic point data page with the largest number of identical hash values as the corresponding characteristic point data page if the scoring result of the voting is larger than a preset threshold value and the score proportion among the candidate results has a step change.
In this embodiment, the terminal data block obtaining module includes:
The terminal corresponding relation establishing module is used for establishing the corresponding relation between the page to-be-compared characteristic point information and the characteristic point data page;
a terminal position acquisition module for acquiring the position information of the reference object in the characteristic point data page from the position information of the reference object in the page original picture;
and the terminal data block obtaining sub-module is used for obtaining the corresponding data block according to the position information.
In this embodiment, the page identification obtaining module 302 includes:
the model loading module is used for loading the deep learning model;
the image input model module is used for inputting the original image of the page into the model;
and the reference object position acquisition module is used for acquiring the reference object existing in the original picture from the model and acquiring the position information of the reference object in the original picture of the page.
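One plausible reading of these three modules, sketched with a generic pretrained torchvision detector; the patent does not name a network, so the model choice, target class, and confidence threshold are assumptions (in practice a model trained on the actual reference object, such as a pointing finger or stylus, would be used).

```python
import torch
import torchvision

# Assumed stand-in for the unspecified deep learning model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def locate_reference_object(page_image, target_label=1, min_score=0.6):
    """page_image: float tensor (3, H, W) in [0, 1]. Returns one (x0, y0, x1, y1) box or None."""
    with torch.no_grad():
        prediction = model([page_image])[0]
    for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
        if int(label) == target_label and float(score) >= min_score:
            return tuple(box.tolist())   # position of the reference object in the page original picture
    return None
```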
In other embodiments of the present application, the interactive implementation terminal further includes a terminal page storage module, configured to store a page feature library. The terminal page storage module comprises:
the printed matter acquisition and extraction module is used for acquiring an original page image of the printed matter and acquiring a feature descriptor;
the extraction index module is used for extracting the hash value of the corresponding feature descriptor and performing inverted index;
The linked list storing module is used for storing, in the form of a linked list, the unique identifiers of the original page images that contain the same hash value at the same position of the hash table;
and the construction storage module is used for constructing and storing the completed page feature library.
In other embodiments of the present application, the print acquisition extraction module includes:
the key point extraction module is used for extracting characteristic points by using a key point detection algorithm;
the direction recognition module is used for recognizing the direction of the characteristic points;
and the descriptor acquisition module is used for describing the feature points to obtain feature descriptors.
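The three sub-modules above map naturally onto a standard local-feature pipeline. The sketch below uses ORB, which combines FAST keypoint detection, orientation assignment, and BRIEF-style binary description in one call; this is one possible choice, not an algorithm prescribed by the patent.

```python
import cv2

def extract_feature_descriptors(image_path):
    """Detect keypoints, assign orientations, and compute descriptors for one original page image."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=500)     # detection + orientation + description
    keypoints, descriptors = orb.detectAndCompute(image, None)
    return keypoints, descriptors           # descriptors feed the hashing and inverted index step
```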
The embodiment of the present application further provides a computer device, referring to fig. 14, the computer device includes a processor 311 and a memory 312, where at least one instruction, at least one section of program, a code set, or an instruction set is stored in the memory 312, and the at least one instruction, the at least one section of program, the code set, or the instruction set is loaded and executed by the processor 311 to implement an interaction implementation method as described above.
The embodiment of the application further provides a computer readable storage medium, in which at least one instruction, at least one section of program, a code set or an instruction set is stored, where the at least one instruction, the at least one section of program, the code set or the instruction set is loaded and executed by a processor to implement an interactive implementation method as described above.
Fig. 15 is a block diagram of an interaction implementation server provided in the fourth embodiment of the present application, where the interaction implementation server is configured to implement the interaction implementation method of the second embodiment, and a part of the description of this embodiment is referred to the second embodiment. In this embodiment, the interactive implementation server includes:
a multimedia storage module 401 for storing a multimedia database, wherein the multimedia database contains a plurality of multimedia files;
the data block receiving module 402 is configured to receive information of a data block, where the data block is obtained by searching and comparing the feature point information to be compared of a page with a page feature library to obtain a corresponding feature point data page, and then locating the data block in the corresponding feature point data page through the position information of a reference object in the original image of the page, and the page feature point information is obtained by extracting feature points from the original image of the page; the page feature library comprises a plurality of feature point data pages, each feature point data page comprises at least one data block, and the multimedia files and the data blocks are in corresponding relation;
a multimedia obtaining module 403, configured to obtain a corresponding multimedia file;
a multimedia output module 404 for outputting multimedia files.
In this embodiment, the interactive implementation server further includes: and the server-side page storage module is used for storing the page feature library.
In this embodiment, the interactive implementation server further includes:
the cover information receiving module is used for receiving cover to-be-compared characteristic point information, wherein the cover to-be-compared characteristic point information is obtained by extracting characteristic points from original cover pictures;
the cover comparison module is used for comparing the cover to-be-compared characteristic points with the cover characteristic sub-library, wherein the cover characteristic sub-library corresponds to the printed matter sub-libraries one by one, the page characteristic library comprises the printed matter sub-library, and the printed matter sub-library comprises a plurality of characteristic point data pages;
the sub-library obtaining module is used for obtaining a corresponding printed matter sub-library;
and the sub-library output module is used for outputting the printed matter sub-library.
In other embodiments of the present application, the data block receiving module 402 includes:
the device comprises a characteristic point and position receiving module, a characteristic point and position determining module and a position determining module, wherein the characteristic point and position receiving module is used for receiving the information of the characteristic point to be compared of the page and the position information of a reference object in an original picture of the page, and the information of the characteristic point of the page is obtained by extracting the characteristic point of the original picture of the page;
the server side comparison obtaining module is used for comparing the page to-be-compared characteristic point information with the page characteristic library and obtaining a corresponding characteristic point data page;
The server-side data block obtaining module is used for obtaining a data block pointed by the reference object from the position information of the reference object in the page original picture;
and the data block receiving module is used for receiving the data block information.
In other embodiments of the present application, the server side comparison obtaining module includes:
the server hash value extraction module is used for extracting hash values from feature descriptors corresponding to the feature points extracted from the original page pictures;
the server side comparison voting module is used for comparing the extracted hash values with the hash values of the characteristic point data pages of the page characteristic library and voting;
the server side selection module is used for selecting, according to the scoring result of the voting, the top N characteristic point data pages as candidate results, wherein N is an integer greater than 1;
and the server side judging module is used for taking the characteristic point data page with the largest number of identical hash values as the corresponding characteristic point data page if the scoring result of the voting is larger than a preset threshold value and the score proportion among the candidate results has a step change.
In other embodiments of the present application, the server-side data block obtaining module includes:
the server-side corresponding relation establishing module is used for establishing the corresponding relation between the characteristic points to be compared of the page and the characteristic point data page;
The server side position acquisition module is used for acquiring the position of the reference object in the characteristic point data page from the position information of the reference object in the page original picture;
and the server-side data block obtaining sub-module is used for obtaining the corresponding data block.
The embodiment of the present application further provides a computer device, referring to fig. 16, the computer device includes a processor 411 and a memory 412, where at least one instruction, at least one section of program, a code set, or an instruction set is stored in the memory 412, and the at least one instruction, the at least one section of program, the code set, or the instruction set is loaded and executed by the processor 411 to implement an interaction implementation method as described above.
The embodiment of the application further provides a computer readable storage medium, in which at least one instruction, at least one section of program, a code set or an instruction set is stored, where the at least one instruction, the at least one section of program, the code set or the instruction set is loaded and executed by a processor to implement an interactive implementation method as described above.
The foregoing is merely a preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (22)
1. An interactive implementation method applied to an applet is characterized by comprising the following steps:
acquiring an original picture of a page;
identifying a reference object and obtaining the position information of the reference object in the original picture of the page;
extracting characteristic points from the original page picture and obtaining information of the characteristic points to be compared of the page;
loading and playing a corresponding multimedia file based on the to-be-compared characteristic point information and the position information of the reference object in the original page picture, wherein the to-be-compared characteristic point information is used for searching and comparing with a page characteristic library to obtain a corresponding characteristic point data page, the page characteristic library comprises a plurality of characteristic point data pages, and each characteristic point data page comprises at least one data block; the corresponding multimedia file is obtained by obtaining a data block of the reference object in the corresponding characteristic point data page through the position information of the reference object in the page original picture and searching and comparing a multimedia database, the multimedia file is contained in the multimedia database, and the multimedia file corresponds to the data block.
2. The interactive implementation method according to claim 1, wherein before loading and playing the corresponding multimedia file, the steps further comprise:
And outputting the characteristic point information to be compared of the page, wherein the characteristic point information to be compared is used for searching and comparing with a page characteristic library of the server to obtain a corresponding characteristic point data page.
3. The interactive implementation method according to claim 1, wherein before loading and playing the corresponding multimedia file, the steps further comprise:
obtaining an original picture of a cover;
extracting feature points from the original picture of the cover;
acquiring and outputting information of characteristic points to be compared of the cover;
loading a corresponding printed matter sub-library, wherein the characteristic points to be compared of the cover are used for search and matching with the cover characteristic sub-library, the content information of each page of the cover characteristic sub-library corresponds to the printed matter sub-libraries one by one, the page characteristic library comprises the printed matter sub-library, the printed matter sub-library comprises a plurality of characteristic point data pages, and the cover characteristic sub-library is obtained by extracting characteristic points from an original cover page image of a printed matter.
4. The interactive implementation method according to claim 1, further comprising, before loading and playing the corresponding multimedia file:
and storing a page feature library.
5. The interactive implementation method according to claim 4, wherein the step of storing the page feature library specifically comprises:
Acquiring an original page image of a printed matter and acquiring a feature descriptor;
extracting hash values of corresponding feature descriptors and performing inverted indexing;
storing, in the form of a linked list, the unique identifiers of the original page images that contain the same hash value at the same position of a hash table;
and constructing and storing a complete page feature library.
6. The method for implementing interaction according to claim 5, wherein the step of obtaining the feature descriptors specifically comprises:
extracting characteristic points by using a key point detection algorithm;
carrying out characteristic point direction identification;
and describing the feature points to obtain feature descriptors.
7. The interactive implementation method according to claim 3 or 4, wherein the loading and playing the corresponding multimedia file comprises:
comparing the page to-be-compared characteristic point information with a local page characteristic library and obtaining a corresponding characteristic point data page;
obtaining a data block pointed by a reference object from the position information of the reference object in the original page picture;
outputting data block information;
and loading the multimedia file corresponding to the data block and playing the multimedia file.
8. The interactive implementation method of claim 7, wherein the step of comparing the feature point information of the page to be compared with the local page feature library and obtaining the corresponding feature point data page specifically comprises:
Acquiring feature descriptors corresponding to feature points in the page feature point information to be compared and extracting hash values;
comparing the hash value with the hash value of the characteristic point data page of the local page characteristic library and voting;
according to the scoring result of the voting, selecting the top N characteristic point data pages as candidate results, wherein N is an integer greater than 1;
if the scoring result of the voting is larger than a preset threshold value and the score proportion among the candidate results has a step change, the characteristic point data page with the largest number of identical hash values is used as the corresponding characteristic point data page.
9. The interactive implementation method according to claim 7, wherein the step of obtaining the data block pointed by the reference object from the position information of the reference object in the original picture of the page specifically includes:
establishing a corresponding relation between the page to-be-compared characteristic point information and the characteristic point data page;
obtaining position information of a reference object in a characteristic point data page from position information of the reference object in an original page picture;
and obtaining a corresponding data block according to the position information.
10. The method for implementing interaction according to any one of claims 1 to 6, wherein the step of identifying the reference object and obtaining the position information of the reference object in the original picture of the page specifically includes:
Loading a deep learning model;
inputting the original page picture into the model;
and acquiring a reference object existing in the original picture from the model and acquiring the position information of the reference object in the original picture of the page.
11. An interactive implementation terminal, comprising:
the page picture acquisition module is used for acquiring an original page picture;
the page identification obtaining module is used for identifying a reference object and obtaining the position information of the reference object in the original page picture;
the page extraction and acquisition module is used for extracting characteristic points from the original page picture and acquiring the information of the characteristic points to be compared of the page;
the loading and playing module is used for loading a corresponding multimedia file based on the to-be-compared characteristic point information and the position information of the reference object in the original page picture and playing the multimedia file through a small program, wherein the to-be-compared characteristic point information of the page is used for searching and comparing with a page characteristic library to obtain a corresponding characteristic point data page, the page characteristic library comprises a plurality of characteristic point data pages, and each characteristic point data page comprises at least one data block; the corresponding multimedia file is obtained by the small program obtaining the data block of the reference object in the corresponding characteristic point data page through the position information of the reference object in the page original picture and carrying out searching comparison on a multimedia database, the multimedia file is contained in the multimedia database, and the multimedia file corresponds to the data block.
12. A computer device comprising a processor and a memory having stored therein at least one instruction, at least one program, code set or instruction set that is loaded and executed by the processor to implement the interactive implementation of any of the preceding claims 1 to 10.
13. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes or a set of instructions, the at least one instruction, the at least one program, the set of codes or the set of instructions being loaded and executed by a processor to implement the interactive implementation of any one of claims 1 to 10.
14. An interactive implementation method applied to a server of an applet is characterized by comprising the following steps:
storing a multimedia database, wherein the multimedia database comprises a plurality of multimedia files;
the method comprises the steps of receiving information of a data block, wherein the data block is obtained by searching and comparing the information of to-be-compared characteristic points of a page with a page characteristic library to obtain a corresponding characteristic point data page and searching and comparing the corresponding characteristic point data page through the position information of a reference object in an original page picture, and the page characteristic point information is obtained by extracting characteristic points of the original page picture; the page feature library comprises a plurality of feature point data pages, each feature point data page comprises at least one data block, and the multimedia file and the data blocks are in corresponding relation;
Obtaining a corresponding multimedia file;
and outputting the multimedia file to an applet, and loading and playing the multimedia file by the applet.
15. The interactive implementing method of claim 14, wherein the step of receiving the information of the data block further comprises:
and storing a page feature library.
16. The interactive implementing method of claim 15, further comprising, before the step of receiving the information of the data block:
receiving cover to-be-compared characteristic point information, wherein the cover to-be-compared characteristic point information is obtained by extracting characteristic points from original cover pictures;
comparing the cover to-be-compared characteristic points with a cover characteristic sub-library, wherein the cover characteristic sub-library corresponds to the printed matter sub-library one by one, the page characteristic library comprises a printed matter sub-library, and the printed matter sub-library comprises a plurality of characteristic point data pages;
obtaining a corresponding printed matter sub-library;
and outputting the printed matter sub-library.
17. The method for implementing interaction according to any one of claims 14 and 15, wherein the step of receiving the information of the data block specifically includes:
receiving page feature point information to be compared and position information of a reference object in an original page picture, wherein the page feature point information is obtained by extracting feature points of the original page picture;
Comparing the page to-be-compared characteristic point information with a page characteristic library and obtaining a corresponding characteristic point data page;
obtaining a data block pointed by a reference object from the position information of the reference object in the original page picture;
data block information is received.
18. The interactive implementation method according to claim 17, wherein the step of comparing the feature point information of the page to be compared with the page feature library and obtaining the corresponding feature point data page specifically comprises:
extracting hash values from feature descriptors corresponding to the feature points extracted from the original page picture;
comparing the hash value with the hash value of the characteristic point data page of the page characteristic library and voting;
according to the scoring result of the voting, selecting the top N characteristic point data pages as candidate results, wherein N is an integer greater than 1;
if the scoring result of the voting is larger than a preset threshold value and the score proportion among the candidate results has a step change, the characteristic point data page with the largest number of identical hash values is used as the corresponding characteristic point data page.
19. The interactive implementation method according to claim 17, wherein the step of obtaining the data block pointed by the reference object from the position information of the reference object in the original picture of the page specifically includes:
Establishing a corresponding relation between the characteristic points to be compared of the page and the characteristic point data page;
obtaining the position of a reference object in a characteristic point data page from the position information of the reference object in the original page picture;
and obtaining a corresponding data block.
20. An interactive implementation server, comprising:
a multimedia storage module for storing a multimedia database, wherein the multimedia database comprises a plurality of multimedia files;
the data block receiving module is used for receiving information of a data block sent by the applet, wherein the data block is obtained by searching and comparing the characteristic point information to be compared of the page with a page characteristic library to obtain a corresponding characteristic point data page and searching and comparing the corresponding characteristic point data page with the position information of a reference object in an original page picture, and the page characteristic point information is obtained by extracting characteristic points of the original page picture; the page feature library comprises a plurality of feature point data pages, each feature point data page comprises at least one data block, and the multimedia file and the data blocks are in corresponding relation;
the multimedia obtaining module is used for obtaining the corresponding multimedia file;
and the multimedia output module is used for outputting the multimedia file to the applet, and loading and playing the multimedia file by the applet.
21. A computer device comprising a processor and a memory having stored therein at least one instruction, at least one program, code set or instruction set that is loaded and executed by the processor to implement the interactive implementation of any of the preceding claims 14 to 19.
22. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement the interactive implementation method of any one of claims 14-19.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010878454.4A CN112199522B (en) | 2020-08-27 | 2020-08-27 | Interactive implementation method, terminal, server, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010878454.4A CN112199522B (en) | 2020-08-27 | 2020-08-27 | Interactive implementation method, terminal, server, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112199522A CN112199522A (en) | 2021-01-08 |
CN112199522B true CN112199522B (en) | 2023-07-25 |
Family
ID=74005099
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010878454.4A Active CN112199522B (en) | 2020-08-27 | 2020-08-27 | Interactive implementation method, terminal, server, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112199522B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005115684A (en) * | 2003-10-08 | 2005-04-28 | Canon Inc | Content search device and content search method |
CN110555435A (en) * | 2019-09-10 | 2019-12-10 | 深圳一块互动网络技术有限公司 | Point-reading interaction realization method |
CN110569818A (en) * | 2019-09-13 | 2019-12-13 | 深圳一块互动网络技术有限公司 | intelligent reading learning method |
CN110704684A (en) * | 2019-10-17 | 2020-01-17 | 北京字节跳动网络技术有限公司 | Video searching method and device, terminal and storage medium |
CN110807388A (en) * | 2019-10-25 | 2020-02-18 | 深圳追一科技有限公司 | Interaction method, interaction device, terminal equipment and storage medium |
CN110930268A (en) * | 2018-09-20 | 2020-03-27 | 阿里巴巴集团控股有限公司 | Education coaching system and data processing method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7672543B2 (en) * | 2005-08-23 | 2010-03-02 | Ricoh Co., Ltd. | Triggering applications based on a captured text in a mixed media environment |
US8615719B2 (en) * | 2005-09-14 | 2013-12-24 | Jumptap, Inc. | Managing sponsored content for delivery to mobile communication facilities |
CN104023250B (en) * | 2014-06-13 | 2015-10-21 | 腾讯科技(深圳)有限公司 | Based on the real-time interactive method and system of Streaming Media |
US20170171471A1 (en) * | 2015-12-14 | 2017-06-15 | Le Holdings (Beijing) Co., Ltd. | Method and device for generating multimedia picture and an electronic device |
Non-Patent Citations (1)
Title |
---|
Optimization and Simulation of an Efficient Retrieval Algorithm for Massive Multimedia Image Information; Wei Bizhong; Wei Hong; Ying Hong; Computer Simulation, No. 11; full text *
Also Published As
Publication number | Publication date |
---|---|
CN112199522A (en) | 2021-01-08 |
Similar Documents
Publication | Title | Publication Date
---|---|---|
CN106326391B (en) | Multimedia resource recommendation method and device | |
US20180300358A1 (en) | Image Retrieval Method and System | |
Chu et al. | Image Retrieval Based on a Multi‐Integration Features Model | |
US10860877B2 (en) | Logistics parcel picture processing method, device and system | |
CN103955499B (en) | A kind of visual experience Enhancement Method based on instant computing and dynamic tracing | |
US20130114900A1 (en) | Methods and apparatuses for mobile visual search | |
CN111460185A (en) | Book searching method, device and system | |
CN107590267A (en) | Information-pushing method and device, terminal and readable storage medium storing program for executing based on picture | |
CN112819073B (en) | Classification network training, image classification method and device and electronic equipment | |
KR20190124436A (en) | Method for searching building based on image and apparatus for the same | |
CN113657273B (en) | Method, device, electronic equipment and medium for determining commodity information | |
CN110059212A (en) | Image search method, device, equipment and computer readable storage medium | |
CN114168768A (en) | Image retrieval method and related equipment | |
CN109886781B (en) | Product recommendation method, device, equipment and storage medium based on painting behaviors | |
CN113157962B (en) | Image retrieval method, electronic device, and storage medium | |
CN112199522B (en) | Interactive implementation method, terminal, server, computer equipment and storage medium | |
CN115797291B (en) | Loop terminal identification method, loop terminal identification device, computer equipment and storage medium | |
CN114973293B (en) | Similarity judging method, key frame extracting method and device, medium and equipment | |
CN111881338A (en) | Printed matter content retrieval method based on social software light application applet | |
CN112214639B (en) | Video screening method, video screening device and terminal equipment | |
CN114610942A (en) | Image retrieval method and device based on joint learning, storage medium and electronic equipment | |
CN112765394A (en) | Data processing method and device, electronic equipment and storage medium | |
CN113821689A (en) | Pedestrian retrieval method and device based on video sequence and electronic equipment | |
CN110781345B (en) | Video description generation model obtaining method, video description generation method and device | |
CN113920406A (en) | Neural network training and classifying method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
2024-01-04 | TR01 | Transfer of patent right | Effective date of registration: 20240104. Address after: 1406, 14th Floor, Building 2, No.1 Courtyard, Shangdi 10th Street, Haidian District, Beijing, 100080. Patentee after: Beijing Anxin Zhitong Technology Co.,Ltd. Address before: Room 403, C4, building 2, software industry base, No. 87, 89, 91, South 10th Road, Gaoxin, Binhai community, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000. Patentee before: Shenzhen yikuai Interactive Network Technology Co.,Ltd. |