CN113111249A - Search processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113111249A
CN113111249A (application CN202110280922.2A)
Authority
CN
China
Prior art keywords
search
statement
target image
image
type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110280922.2A
Other languages
Chinese (zh)
Inventor
刘俊启
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110280922.2A priority Critical patent/CN113111249A/en
Publication of CN113111249A publication Critical patent/CN113111249A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9532 Query formulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/957 Browsing optimisation, e.g. caching or content distillation

Abstract

The present disclosure relates to search processing methods and apparatuses, electronic devices, and storage media, and in particular to the field of artificial intelligence technologies such as natural language processing, computer vision, and intelligent search. The specific implementation scheme is as follows: acquiring a search statement; determining a type of the search statement; displaying a preset control on a search interface when the search statement is of a preset type; acquiring a target image to be identified; and acquiring and returning a search result based on the search statement and the target image. In this way, the search function of the system can be improved, search results can be determined more accurately and the user's search requirements met as far as possible, search-processing efficiency and accuracy are improved, and a better user experience is provided.

Description

Search processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to the field of artificial intelligence technologies such as natural language processing, computer vision, and intelligent search, and in particular, to a search processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of computer technology, people have become accustomed to obtaining information through the mobile Internet. When searching for information, a plain text search may fail to return the intended search result. Therefore, how to improve the search function of the system so as to better meet user requirements has become an urgent problem to be solved.
Disclosure of Invention
The disclosure provides a search processing method, a search processing device, an electronic device and a storage medium.
In one aspect of the present disclosure, a search processing method is provided, including:
acquiring a search statement;
determining a type of the search statement;
under the condition that the search statement is of a preset type, displaying a preset control on a search interface;
acquiring a target image to be identified;
and acquiring and returning a search result based on the search statement and the target image.
In another aspect of the present disclosure, there is provided a search processing apparatus including:
the first acquisition module is used for acquiring a search statement;
a determining module for determining a type of the search statement;
the display module is used for displaying a preset control on a search interface under the condition that the search statement is of a preset type;
the second acquisition module is used for acquiring a target image to be identified;
and the third acquisition module is used for acquiring and returning a search result based on the search statement and the target image.
In another aspect of the present disclosure, an electronic device is provided, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a search processing method as described in embodiments of one aspect above.
In another aspect of the present disclosure, a non-transitory computer-readable storage medium is provided, storing a computer program that causes a computer to execute the search processing method according to the embodiment of the above aspect.
In another aspect of the present disclosure, a computer program product is provided, including a computer program which, when executed by a processor, implements the search processing method according to the embodiment of the above aspect.
According to the search processing method and apparatus, the electronic device, and the storage medium, a search statement is first obtained and its type determined; when the search statement is of the preset type, a preset control is displayed on the search interface, the target image to be identified is obtained, and a search result is then obtained and returned based on the search statement and the target image. In this way, the search function of the system can be improved, search results can be determined more accurately and the user's search requirements met as far as possible, search-processing efficiency and accuracy are improved, and a better user experience is provided.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic flowchart of a search processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a search processing method according to another embodiment of the disclosure;
fig. 3 is a schematic flowchart of a search processing method according to another embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a search processing apparatus according to an embodiment of the present disclosure;
fig. 5 is a block diagram of an electronic device for implementing a search processing method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments are included to assist understanding and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Data processing covers data acquisition, storage, retrieval, processing, transformation, and transmission. Its mode differs with the structure and working mode of the processing equipment and with the temporal and spatial distribution of the data; different processing modes require different hardware and software support, each mode has its own characteristics, and an appropriate mode should be selected according to the actual environment of the application problem.
Artificial intelligence is the discipline that studies how to make computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), covering both the hardware level and the software level. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning, deep learning, big data processing, knowledge graph technology, and the like.
Natural language processing is the processing, understanding, and use of human languages (such as Chinese and English) by computers; it is an interdisciplinary field between computer science and linguistics, also commonly referred to as computational linguistics. Natural language is the fundamental mark distinguishing humans from other animals, and human thought is inseparable from language, so natural language processing embodies one of the highest goals of artificial intelligence: only when a computer can process natural language can a machine be said to realize real intelligence.
Computer vision is an interdisciplinary scientific field that studies how computers can gain high-level understanding from digital images or videos. From an engineering point of view, it seeks to automate tasks that the human visual system can accomplish. Computer vision tasks include methods of acquiring, processing, analyzing, and understanding digital images, and methods of extracting high-dimensional data from the real world to produce numerical or symbolic information, for example in the form of decisions.
Intelligent search is a new generation of search engine incorporating artificial intelligence techniques. Besides traditional functions such as quick retrieval and relevance ranking, it can provide user role registration, automatic identification of user interests, semantic understanding of content, intelligent information filtering and pushing, and the like. The content retrieved by intelligent search is not limited to plain information, and the intelligent analysis of query conditions mainly includes the following two aspects: extracting the effective components of the query conditions, including vocabulary and logical relations; and establishing an e-commerce knowledge base to obtain synonyms, near-synonyms, and related words of the keywords.
A search processing method, apparatus, electronic device, and storage medium of the embodiments of the present disclosure are described below with reference to the accompanying drawings.
The search processing method of the embodiment of the present disclosure may be executed by a search processing apparatus provided in the embodiment of the present disclosure, and the apparatus may be configured in an electronic device. It is understood that the electronic device may be configured at a server.
Fig. 1 is a schematic flowchart of a search processing method according to an embodiment of the present disclosure.
As shown in fig. 1, the search processing method may include the following steps:
step 101, a search statement is obtained.
The search sentence may be text information, or it may be voice information that is processed to obtain corresponding text information; this disclosure does not limit this.
Step 102, determining the type of the search statement.
The search statement may be any statement, for example: "today's weather", "what breed does the husky belong to", "what flower is this", "are these two flowers different", and so on; this disclosure does not limit this.
In addition, the type of the search statement may be a preset type, a non-preset type, and the like, which is not limited in this disclosure.
The preset type can be any type specified in advance as needed. For example, a search sentence of the preset type may be any sentence containing at least one pronoun, such as "what is this", "what flower is this", or "are these two dogs of the same breed", which is not limited by the present disclosure.
In addition, the non-preset type may be any other type than the preset type, and the non-preset type of the search term may be any other search term than the preset type, which is not limited in this disclosure.
It is understood that a search statement of the preset type may contain a particular pronoun, involve something unknown, or lack a specific referent, and the specific referent then needs to be determined with the help of other auxiliary information; this disclosure does not limit this.
In addition, there may be a variety of ways in determining the type of search statement.
For example, the search term may be matched with each preset template term, and the type of the search term may be determined according to the matching degree between the search term and each preset template term.
The preset template sentence can be any template sentence set in advance, for example: "what is this", "who is A", "what X is this", "are these two XX the same YY", where A can be any pronoun (such as this, her, or his) and X, XX, and YY can be any noun (such as an animal, plant, or object); this disclosure does not limit this.
In addition, when determining the matching degree between the search sentence and each preset template sentence, matching may be performed according to the component structure of the sentence, and then each matching degree is determined, or each matching degree may also be determined by using methods such as semantic similarity, which is not limited by the present disclosure.
For example, a threshold may be set in advance, after the search statement is matched with each preset template statement, the maximum matching degree is selected, the maximum matching degree is compared with the threshold, and when the matching degree is greater than the threshold, the search statement may be determined to be of the preset type.
For example, the search statement "what dog is this" is matched with each preset template statement; its matching degree with the template statement "what X is this" is the highest and is greater than the set threshold, so it can be determined that the search statement is of the preset type.
It should be noted that the above examples are only illustrative, and cannot be taken as a limitation on the template statements and the like preset in the embodiments of the present disclosure.
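The template-matching route described above can be sketched as follows. The template list, the threshold value, and the use of `difflib.SequenceMatcher` as the matching-degree computation are all illustrative assumptions; the patent does not specify a concrete matching algorithm.

```python
import difflib

# Hypothetical English renderings of the preset template statements.
TEMPLATES = ["what is this", "who is this", "what x is this",
             "are these two xx the same yy"]
THRESHOLD = 0.6  # assumed value for the preset matching-degree threshold

def is_preset_type(statement: str) -> bool:
    """Match the statement against every template, take the maximum
    matching degree, and compare it with the threshold (step described
    in the text above)."""
    best = max(
        difflib.SequenceMatcher(None, statement.lower(), t).ratio()
        for t in TEMPLATES
    )
    return best > THRESHOLD

is_preset_type("what dog is this")  # close to "what x is this"
is_preset_type("today weather")     # matches no template well
```

A real system would likely match on sentence structure or semantic similarity rather than raw character overlap, as the surrounding text notes.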
Alternatively, the type of the search term may be determined based on a keyword and/or a corresponding search intention included in the search term.
The keyword may be any word important for semantic understanding, such as a noun, a pronoun, a verb, and the like, which is not limited in this disclosure.
For example, the search term is "what brand is this"; the keywords contained therein are "this", "is", and "brand", and the pronoun "this" refers to something unknown, so it can be determined that the search term is of the preset type.
Or, the search sentence is "how much is this", and the corresponding search intention is: to inquire a price. The search sentence has no specific referent, so its type can be determined as the preset type.
Or, the search statement is "what dog is this", the keywords contained therein are "this", "is", and "dog", and the corresponding search intention is: to inquire the breed of the dog. The search statement can therefore be determined to be of the preset type according to the keyword and the search intention.
It should be noted that the above examples are merely illustrative, and should not be taken as limitations on the search term, the keyword, the search intention, the type of the search term, and the like included in the search term in the embodiments of the present disclosure.
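The keyword-based check in the examples above can be sketched as a pronoun test; the pronoun list and the decision rule are illustrative assumptions, not the patent's actual implementation.

```python
# Deictic pronouns whose referent cannot be resolved from text alone.
DEICTIC_PRONOUNS = {"this", "that", "these", "those", "it"}

def needs_image(statement: str) -> bool:
    """Treat a statement as the preset type (image required) when it
    contains a deictic pronoun with no concrete referent in the text."""
    tokens = statement.lower().rstrip("?").split()
    return any(tok in DEICTIC_PRONOUNS for tok in tokens)

needs_image("what dog is this")  # "this" has no referent in the text
needs_image("today weather")     # answerable by plain text search
```

The search-intention side of the check (e.g. "inquire a price") would come from an intent classifier, which is outside the scope of this sketch.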
And 103, displaying a preset control on a search interface under the condition that the search statement is of a preset type.
A sentence of the preset type involves unclear reference, vague content, and the like, so a text search alone cannot provide the correct result; the correct search result can be determined accurately only with the help of other search modes, such as image recognition.
In addition, the preset control may be any control set in advance, for example, the preset control may be: an image upload control, a photograph upload control, and the like, which are not limited in this disclosure.
For example, the search term is "what flower is this", and it has been determined that the search term is of the preset type. If a text search were performed directly, the flower's type, name, and the like could not be determined accurately, because the statement carries no information about the specific flower. At this moment, a preset image-upload control can be displayed on the search interface, so that the flower type can be determined with the help of image information and the user's question can be answered.
It should be noted that the above examples are only illustrative, and should not be taken as limitations on search terms, types, and the like in the embodiments of the present disclosure.
And 104, acquiring a target image to be identified.
The user can upload the target image to be identified by triggering the preset control displayed on the search interface, and then the server can acquire the target image to be identified.
Or, after triggering the preset control, the user may take a picture with a photograph-and-upload or other related control and upload it as the target image to be recognized, which the server can then obtain.
It should be noted that the above example is only an illustrative example, and cannot be taken as a limitation on uploading a target image to be recognized and the like in the embodiment of the present disclosure.
And 105, acquiring and returning a search result based on the search sentence and the target image.
The search sentence and the target image can be combined to perform search identification together, so that the search result is determined.
For example, the search term is "what flower is this", and it has been determined that the search term is of the preset type. After the target image is acquired, the entire image may be identified; the flower characteristics are then determined according to the identification result, and a search is performed according to those characteristics, for example in various databases or with a search engine, so that the search result can be returned to the user.
Alternatively, the search term is "find flowers similar to this one", and it has been determined that the search term is of the preset type. After the target image is acquired, the image may first be segmented to determine the image segment corresponding to the flower in the target image. Then only that image segment is further identified; for example, according to the characteristics of the petals, calyx, and the like in the segment, a search is performed in an image knowledge base related to flowers, so that other flowers close to this one can be determined.
It should be noted that the above examples are only illustrative, and should not be taken as limitations on obtaining and returning search results and the like in the embodiments of the present disclosure.
According to the embodiment of the disclosure, a search statement is first obtained and its type determined; when the search statement is of the preset type, a preset control is displayed on the search interface, the target image to be identified is obtained, and a search result is then obtained and returned based on the search statement and the target image. In this way, the search function of the system can be improved, search results can be determined more accurately and the user's search requirements met as far as possible, search-processing efficiency and accuracy are improved, and a better user experience is provided.
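Steps 101 to 105 can be sketched as a single handler, with injected callables standing in for the classification, recognition, and retrieval services; every name and stub below is an assumption made for illustration, not part of the patent's implementation.

```python
def handle_search(statement, get_image, classify, recognize, retrieve):
    """Steps 101-105: classify the statement; only when the preset type
    is detected, request an image (via the displayed control), recognize
    it, and search on both the statement and the image features."""
    if classify(statement) == "preset":       # steps 101-103
        image = get_image()                   # step 104: image from control
        features = recognize(image)           # step 105: identify image
        return retrieve(statement, features)
    return retrieve(statement, None)          # plain text search otherwise

# Usage with trivial stand-ins for the real services:
result = handle_search(
    "what flower is this",
    get_image=lambda: b"<image bytes>",
    classify=lambda s: "preset" if "this" in s else "other",
    recognize=lambda img: ["five petals", "yellow"],
    retrieve=lambda s, f: {"query": s, "features": f},
)
```

The point of the control flow is that the image-acquisition step is entered only for preset-type statements, matching the conditional display of the control in step 103.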
In the embodiment, the target image to be identified is further acquired under the condition that the search sentence is of the preset type, and then an accurate search result can be determined according to the search sentence and the target image. It can be understood that, when obtaining the search result, the search statement may be analyzed first, and then the target image is correspondingly identified and processed according to whether the search statement includes the keyword of the specified type, so as to determine the search result, which is further described with reference to fig. 2.
Fig. 2 is a schematic flowchart of a search processing method according to an embodiment of the present disclosure. As shown in fig. 2, the search processing method may include the following steps:
step 201, a search statement is obtained.
At step 202, the type of the search statement is determined.
And 203, displaying a preset control on a search interface under the condition that the search statement is of a preset type.
And step 204, acquiring a target image to be identified.
Step 205, the search sentence is parsed to determine the keywords contained in the search sentence.
The keyword may be any word important for semantic understanding, such as a noun, a pronoun, an adjective, a verb, and the like, which is not limited in this disclosure.
In addition, there may be a variety of ways in determining keywords contained in a search term.
For example, after the search sentence is analyzed, the part-of-speech of each word may be determined, and then the words such as nouns, pronouns, verbs, etc. may be determined as the keywords.
For example, in the search sentence "what dog is this", "this" is a pronoun, "is" is a verb, and "dog" is a noun, so "this", "is", and "dog" can be determined as the keywords.
Alternatively, after the search sentence is parsed, the keyword may be determined from the sentence component, and for example, a word such as "subject" or "object" having a definite meaning may be determined as the keyword.
For example, in the search sentence "what flower is this", "this" is the subject and "flower" belongs to the predicate; both are words with a definite meaning in the sentence, so "this" and "flower" can be determined as the keywords.
It should be noted that the above examples are merely illustrative, and cannot be used as a limitation to keywords, determined keywords, and the like in the embodiments of the present disclosure.
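The part-of-speech selection rule above can be sketched as follows. The tiny hand-built POS table is an illustrative assumption standing in for a real part-of-speech tagger.

```python
# Illustrative POS lookup; a production system would call a tagger, but
# a small table is enough to show the keyword-selection rule.
POS = {"what": "pron", "this": "pron", "is": "verb",
       "dog": "noun", "flower": "noun", "a": "det", "the": "det"}
KEYWORD_POS = {"noun", "pron", "verb"}  # parts of speech kept as keywords

def extract_keywords(statement: str) -> list[str]:
    """Keep tokens whose part of speech marks them as keywords
    (nouns, pronouns, verbs), as described in the text above."""
    tokens = statement.lower().rstrip("?").split()
    return [t for t in tokens if POS.get(t) in KEYWORD_POS]

extract_keywords("what dog is this")
```

The alternative route via sentence components (subject, object) would replace the `POS` lookup with a dependency or constituency parse.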
And step 206, under the condition that the search sentence comprises at least one keyword of the specified type, segmenting the target image to obtain an image segment where the object matched with the keyword of the specified type is located.
The specified type of keyword may be a noun, etc., which may represent a specific object, etc., and this disclosure does not limit this.
In addition, the search sentence may include a keyword of a specific type, or may also include a plurality of keywords of specific types, which is not limited in this disclosure.
For example, the search sentence is "what breeds are this dog and cat"; after parsing, the keywords are "this", "dog", "cat", "are", and "breeds". The keywords of the specified type are nouns that can represent a specific type of object, so here they are "dog" and "cat". The target image is then segmented, thereby determining the image segment of the dog and the image segment of the cat in the target image.
It should be noted that the above examples are only illustrative, and should not be taken as limitations on search sentences, keywords, image segments in which objects of keyword matching are located, and the like in the embodiments of the present disclosure.
And step 207, acquiring and returning a search result based on the image segment and the search statement.
For example, the search sentence is "what dog is this" and the keyword of the specified type is the noun "dog"; after the target image is segmented, the image segment where the "dog" is located in the target image can be determined. Then, according to the search statement, only that image segment is further searched and identified so as to determine the breed of the dog, and the breed can be returned to the user as the search result.
It should be noted that the above examples are only illustrative, and should not be taken as limitations on search sentences, keywords, image segments in which objects of keyword matching are located, and the like in the embodiments of the present disclosure.
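Steps 206-207, together with the fallback of step 208 when no specified-type keyword is present, can be sketched as follows; the noun list and the segmenter/searcher callables are hypothetical stand-ins for the real services.

```python
# Illustrative nouns of the "specified type" (concrete objects).
OBJECT_NOUNS = {"dog", "cat", "flower", "car"}

def search_with_segments(statement, image, segment, search):
    """Segment the image; if the statement names concrete objects, keep
    only their segments (steps 206-207), otherwise search the segment of
    every recognized object (step 208 fallback)."""
    named = [w for w in statement.lower().rstrip("?").split()
             if w in OBJECT_NOUNS]
    segments = segment(image)                      # {label: image segment}
    picked = ({k: v for k, v in segments.items() if k in named}
              if named else segments)
    return {label: search(statement, seg) for label, seg in picked.items()}

# Trivial stand-ins for the segmentation and retrieval services:
out = search_with_segments(
    "what dog is this", b"<image>",
    segment=lambda img: {"dog": b"seg-dog", "cat": b"seg-cat"},
    search=lambda s, seg: (s, seg),
)
```

Here the statement names "dog", so only the dog's segment is searched even though the segmenter also found a cat.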
And step 208, under the condition that the search sentence does not contain any keyword of a specified type, identifying the target image to determine each object contained in the target image, the image segment where each object is located and the position information of each object in the image.
The search sentence can be any sentence, so after it is parsed it may turn out to contain no keyword of the specified type; to ensure the accuracy of the search result, the target image can be identified first.
For example, the search sentence is "what breeds are these"; the keywords of the specified type are nouns that can represent a specific object, such as cat, dog, or car, and no such keyword is included in the search sentence. At this time, the target image is recognized, so that each object contained in it, such as the dog and the cat, the image segment of each in the target image, and the position information of each in the image can be determined.
It should be noted that the above examples are merely illustrative, and are not intended to limit the search term, the target image, and the like in the implementation of the present disclosure.
Step 209, obtaining candidate search results based on the image segment where each object is located and the search sentence.
The image segment where each object is located may be further identified; for example, the segment may be searched and identified according to the search statement, or matched against other image knowledge bases, so as to determine the candidate search result corresponding to each image segment.
For example, the search sentence is "what breeds are these", which includes no keyword of the specified type, so the target image is recognized to determine each object in it: "dog", "cat", and "book", together with the image segment of each in the target image and the position information of each in the image. According to the search sentence "what breeds are these", "dog" and "cat" have breed classifications while "book" does not, so the image segments where the "dog" and "cat" are located can be further identified and processed, and the obtained results can be determined as the candidate search results.
It should be noted that the above examples are only illustrative, and should not be taken as limitations to obtaining various search results and the like in the implementation of the present disclosure.
And step 210, fusing the candidate search results according to the size of the image segment where each object is located and/or the position information of each object in the image to generate a search result to be returned.
The sizes of the image segments in which the objects are located may be the same or different. Therefore, the candidate search results can be fused according to the sizes of the image segments corresponding to the objects.
For example, if the size of the image segment in which object A is located is 50 × 50 and the size of the image segment in which object B is located is 10 × 5, the segment of object A is larger, so A is the main object the user wants to search for, while the segment of object B is smaller, so B is a secondary object or not an object to be searched at all. If objects A and B each correspond to 5 candidate search results, then when the candidate search results are fused, all 5 candidate results for object A and, for example, 2 of the candidate results for object B can be displayed as the search results to be returned. Search results generated after such fusion are accurate, correct, and comprehensive, can well meet the user's search requirements, and solve the search problem.
It should be noted that the sizes of the objects A, B and their respective image segments, the number of candidate search results, and the like are merely illustrative, and are not intended to limit the sizes of the objects and their respective image segments, the candidate search results, and the like in the embodiments of the present disclosure.
Alternatively, the candidate search results may be fused based on the position information of each object in the target image. For example, object A is located at the center of the target image and corresponds to 6 candidate search results, and object B is located at the lower left of the target image and corresponds to 6 candidate search results. After fusing the candidate search results, the obtained search result may be: XX is located at the center of the image, and YY is located at the lower left of the image.
It should be noted that the above-mentioned objects A, B, their respective position information in the target image, the number of candidate search results, and the like are merely illustrative, and are not intended to be limitations of the respective objects, their respective position information in the target image, their respective candidate search results, and the like in the embodiments of the present disclosure.
Alternatively, the candidate search results may be fused according to the size of the image segment in which each object is located and the position information of each object in the target image.
For example, object A is located on the left side of the target image, the size of its corresponding image segment is 43 × 35, and it corresponds to 7 candidate search results; object B is located on the right side of the target image, the size of its corresponding image segment is 3 × 3, and it corresponds to 3 candidate search results. The candidate search results corresponding to objects A and B are then fused, so as to obtain: XX at the left side of the image and YY at the right side of the image.
It should be noted that the sizes, the position information, the number of candidate search results, and the like of the objects A, B and their respective image segments are merely illustrative, and are not intended to limit the sizes, the position information, the candidate search results, and the like of the objects and their respective image segments in the embodiment of the present disclosure.
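The combined size-and-position fusion above can be sketched as follows. The ordering rule (larger segment first), the tuple layout, and the output wording are illustrative assumptions, not part of the disclosed method.

```python
# Hypothetical sketch: fuse per-object candidate descriptions by ordering
# them by segment area (primary object first) and stating each object's
# position in the image.

def fuse_with_position(objects):
    """objects: list of (label, (width, height), position) tuples."""
    # Larger segments are treated as primary and described first.
    ordered = sorted(objects, key=lambda o: o[1][0] * o[1][1], reverse=True)
    return " and ".join(f"{label} at the {position} of the image"
                        for label, (_w, _h), position in ordered)

# Object A (43 x 35, left side) and object B (3 x 3, right side), as above.
objects = [("YY", (3, 3), "right side"), ("XX", (43, 35), "left side")]
print(fuse_with_position(objects))
```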
According to the embodiments of the present disclosure, the type of the search sentence can be determined first, and a preset control is displayed on the search interface under the condition that the search sentence is of the preset type. Then the target image to be identified is acquired and the search sentence is analyzed. When the search sentence contains a keyword of the specified type, the target image can be segmented to obtain the image segment where the object matched with the keyword of the specified type is located, and the search result is then acquired and returned based on the image segment and the search sentence. Alternatively, when the search sentence does not contain any keyword of the specified type, the target image may be identified first to determine each object contained in the target image, the image segment where each object is located, and the position information of each object in the image, and the search result to be returned is then generated according to the acquired candidate search results. Therefore, the search sentence can be analyzed under the condition that it is of the preset type, and the search result is generated in a corresponding manner according to whether the search sentence contains a keyword of the specified type, so that the search function of the system is more complete, the acquired search information is more accurate, the search requirements of the user are met as far as possible, the efficiency and accuracy of information search processing are improved, and a better use experience can be provided for the user.
In the above embodiment, when the search term is of the preset type, the search term is analyzed, and a search result is generated in a corresponding manner according to whether the search term includes the keyword of the specified type. In a possible implementation manner, when the search result is obtained, a search intention corresponding to the search sentence may be determined, and then the target image is processed according to the search intention, so as to generate the search result. The above process is described in detail below with reference to fig. 3.
Fig. 3 is a schematic flowchart of a search processing method according to an embodiment of the present disclosure.
As shown in fig. 3, the search processing method may include the following steps:
step 301, a search statement is obtained.
Step 302, determine the type of search statement.
Step 303, displaying a preset control on a search interface under the condition that the search statement is of a preset type.
Step 304, acquiring a target image to be identified.
Step 305, determining the keywords contained in the search sentence and the corresponding search intention.
There are various ways to determine the keywords contained in the search term. For example, a preset template may be used to determine the keywords contained in the search sentence, or the keywords contained in the search sentence may also be determined by means of entity relationship extraction, part-of-speech tagging, and the like, which is not limited by the present disclosure.
In addition, the search intention corresponding to the search sentence can be determined by using the related technology. For example, the sentence may be parsed first, and then the search intention corresponding to the search sentence may be determined by using a named entity identifier, etc., which is not limited in this disclosure.
The above examples are merely illustrative, and are not intended to limit the manner in which the search term is analyzed in the embodiments of the present disclosure.
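The template-based keyword determination mentioned above can be sketched as follows. The regular expressions merely stand in for the preset templates the text describes, and the specific patterns and function name are assumptions for illustration.

```python
# Hypothetical sketch: determine the specified-type keyword in a search
# sentence with preset templates, here modeled as regular expressions in
# which one capture group marks the object keyword.
import re

# Assumed templates; each captures the object keyword, if any.
TEMPLATES = [
    re.compile(r"what (?:kind|breed) of (\w+) is this"),
    re.compile(r"how much does this (\w+) cost"),
]

def extract_keyword(sentence):
    for template in TEMPLATES:
        match = template.search(sentence)
        if match:
            return match.group(1)
    return None  # the sentence contains no keyword of the specified type

print(extract_keyword("what breed of dog is this"))  # -> dog
print(extract_keyword("how much money"))             # -> None
```

In practice the disclosure also allows entity relationship extraction or part-of-speech tagging for the same purpose; the template route is shown here only because it is the simplest to illustrate.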
Step 306, under the condition that the search statement does not contain any keyword of the specified type, determining the target type of the object to be acquired according to the search intention.
For example, the search sentence is "how much money". After the search sentence is analyzed, it can be determined that it does not include any keyword of the specified type and that its search intention is: asking the price. According to the search intention, it can be determined that the target type of the object to be acquired is any object that can be measured in money, such as an article, an animal, or a plant.
It should be noted that the above examples are only examples, and cannot be used as limitations on search statements, target types of objects to be acquired, and the like in the embodiments of the present disclosure.
Step 307, segmenting the target image to obtain an image segment where the object matched with the target type is located.
The target image can be segmented first, and then, according to the target type of the object to be acquired, the image segment where the object matched with the target type is located can be determined among the segmented image segments, so that the amount of data to be processed can be reduced and the efficiency can be improved.
For example, the search sentence is "how much money", and the search intention is: asking the price, so the target type of the object to be acquired may be an article, an animal, a plant, etc. The target image contains a vehicle and a pedestrian; the vehicle can be priced, while a pedestrian cannot be measured in money, so the object matched with the target type can be determined to be the vehicle, and the image segment where the vehicle is located can then be acquired.
It should be noted that the above examples are only examples, and cannot be used as limitations on search statements, target types of objects to be acquired, and the like in the embodiments of the present disclosure.
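The segment-then-filter step above can be sketched as follows. The segment dictionaries, category names, and the "priceable" category set are assumptions for illustration; in practice they would come from whatever detector and intent-to-type mapping the system uses.

```python
# Hypothetical sketch: after the target image has been segmented, keep only
# the segments whose object matches the target type derived from the search
# intention ("asking the price" -> anything that can be measured in money).

# Assumed target types for the "asking the price" intention.
PRICEABLE_CATEGORIES = {"article", "animal", "plant"}

def segments_for_target_type(segments, target_categories):
    """segments: list of dicts with 'label', 'category', and 'bbox' keys."""
    return [seg for seg in segments if seg["category"] in target_categories]

# A vehicle (an article, so priceable) and a pedestrian (a person, not priceable).
segments = [
    {"label": "vehicle", "category": "article", "bbox": (10, 10, 200, 120)},
    {"label": "pedestrian", "category": "person", "bbox": (220, 30, 60, 150)},
]
matched = segments_for_target_type(segments, PRICEABLE_CATEGORIES)
print([seg["label"] for seg in matched])  # -> ['vehicle']
```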
Step 308, acquiring and returning a search result based on the image segment and the search statement.
The acquired image segments can be further processed, so that a more accurate search result can be determined.
For example, the search statement is "how much money is", and the acquired image segment is the image segment where the vehicle is located. The image segment may then be further processed, such as searched, looked up, identified based on the identification, specification, etc. of the vehicle, so that a price for the vehicle may be determined, which may then be returned to the user as a search result.
It should be noted that the above examples are only examples, and cannot be used as limitations on search statements, target types of objects to be acquired, and the like in the embodiments of the present disclosure.
It can be understood that after the intention of the search sentence is determined, feature extraction can be performed on the target image according to the search intention, and then a search result is determined.
Specifically, the intention recognition may be performed on the search sentence to determine the search intention, then the feature extraction may be performed on the target image according to the search intention to determine the feature information of the target image, and then the search result may be obtained and returned based on the feature information and the search intention.
After intention identification is performed on the search sentence and the search intention is determined, feature extraction on the target image can purposefully and preferentially extract the feature information that is related to the search intention and has a large influence on it, while information that is unrelated to the search intention or has little influence on it need not be extracted. This reduces the processing of redundant data and improves efficiency.
For example, the search sentence is "how much money is", the search intention is to ask for a price, and then feature extraction may be performed on the target image according to the search intention.
For example, the target image includes a vehicle and a dog, and the search intent is to ask for a price. When the features of the vehicle in the target image are extracted, features related to the price of the vehicle, such as a brand identifier of the vehicle, a model identifier of the vehicle, an external dimension of the vehicle, and the like, can be purposefully extracted. When the features of the dog in the target image are extracted, features related to the price of the dog, such as the body type, hair and the like of the dog, can be purposefully extracted. The extracted feature information of the vehicle and the extracted feature information of the dog may be subjected to search processing and the like, so as to determine information such as the price of the vehicle, the price and the type of the dog, and the price of the vehicle, the price and the type of the dog may be returned to the user as search results.
It should be noted that the above examples are merely illustrative, and are not intended to limit the search sentence, the search intention, the search result, and the like in the embodiments of the present disclosure.
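The intent-driven feature selection above can be sketched as follows. The mapping table, feature names, and function name are assumptions for illustration; the actual extractors would be whatever vision models the system employs.

```python
# Hypothetical sketch: choose which features to extract for each detected
# object according to the search intention, so that intent-irrelevant
# features are never computed at all.

# Assumed mapping from (intention, object) to the price-relevant features
# listed in the text; unknown pairs yield an empty extraction plan.
INTENT_FEATURES = {
    ("ask_price", "vehicle"): ["brand_mark", "model_mark", "outer_dimensions"],
    ("ask_price", "dog"): ["body_type", "coat"],
}

def features_to_extract(intention, detected_objects):
    return {obj: INTENT_FEATURES.get((intention, obj), [])
            for obj in detected_objects}

plan = features_to_extract("ask_price", ["vehicle", "dog"])
print(plan["vehicle"])  # -> ['brand_mark', 'model_mark', 'outer_dimensions']
print(plan["dog"])      # -> ['body_type', 'coat']
```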
According to the embodiment of the disclosure, the type of the search statement can be determined, a preset control is displayed on a search interface under the condition that the search statement is of the preset type, then the target image to be identified is obtained, the search statement is analyzed, so that the target type of the object to be obtained can be determined according to the search intention corresponding to the search statement, then the target image is segmented to obtain the image segment where the object matched with the target type is located, and therefore the search result can be obtained and returned based on the image segment and the search statement. Therefore, under the condition that the search sentence is of the preset type, the corresponding image segment can be determined according to the search intention corresponding to the search sentence, so that a more accurate search result is determined, the search function of the system is more perfect, and the acquired search information is more accurate.
In order to implement the above embodiments, the present disclosure also provides a search processing apparatus. Fig. 4 is a schematic structural diagram of a search processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 4, the search processing apparatus 400 includes: a first obtaining module 410, a determining module 420, a displaying module 430, a second obtaining module 440, and a third obtaining module 450.
The first obtaining module 410 is configured to obtain a search statement.
A determining module 420, configured to determine a type of the search statement.
And a display module 430, configured to display a preset control on a search interface when the search statement is of a preset type.
And a second obtaining module 440, configured to obtain a target image to be identified.
And a third obtaining module 450, configured to obtain and return a search result based on the search statement and the target image.
In a possible implementation manner, the third obtaining module 450 is specifically configured to analyze the search statement to determine a keyword included in the search statement; under the condition that the search sentence contains at least one keyword of a specified type, segmenting the target image to obtain an image segment where an object matched with the keyword of the specified type is located; and acquiring and returning a search result based on the image segment and the search statement.
In a possible implementation manner, the third obtaining module 450 is further specifically configured to, when the search statement does not include any keyword of a specified type, identify the target image to determine each object included in the target image, an image segment where each object is located, and position information of each object in the image; acquiring candidate search results based on the image segment where each object is located and the search statement; and fusing the candidate search results according to the size of the image segment where each object is located and/or the position information of each object in the image to generate a search result to be returned.
In a possible implementation manner, the third obtaining module 450 is further specifically configured to determine a keyword and a corresponding search intention included in the search statement; determining the target type of the object to be acquired according to the search intention under the condition that the search statement does not contain any keyword of a specified type; segmenting the target image to obtain an image segment where an object matched with the target type is located; and acquiring and returning a search result based on the image segment and the search statement.
In a possible implementation manner, the third obtaining module 450 is further specifically configured to perform intent recognition on the search statement to determine a search intent; according to the search intention, performing feature extraction on the target image to determine feature information of the target image; and obtaining and returning a search result based on the feature information and the search intention.
In a possible implementation manner, the determining module 420 is specifically configured to match the search statement with each preset template statement, and determine the type of the search statement according to the matching degree between the search statement and each preset template statement.
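The template matching performed by the determining module 420 can be sketched as follows. The token-overlap (Jaccard) similarity, the example template sentences and their types, and the 0.5 threshold are all assumptions for illustration; the disclosure does not fix a particular matching-degree measure.

```python
# Hypothetical sketch: score the search sentence against each preset
# template sentence by token overlap and take the type of the best match
# if its score clears a threshold.

TEMPLATE_SENTENCES = {  # assumed preset template sentences and their types
    "what kind is this": "image_search",
    "how much money is this": "image_search",
    "weather today": "text_search",
}

def sentence_type(sentence, threshold=0.5):
    tokens = set(sentence.split())
    best_type, best_score = None, 0.0
    for template, t_type in TEMPLATE_SENTENCES.items():
        t_tokens = set(template.split())
        score = len(tokens & t_tokens) / len(tokens | t_tokens)  # Jaccard
        if score > best_score:
            best_type, best_score = t_type, score
    return best_type if best_score >= threshold else None

print(sentence_type("how much money is this"))  # -> image_search
```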
In a possible implementation manner, the determining module 420 is further specifically configured to determine the type of the search statement according to a keyword and/or a corresponding search intention included in the search statement.
The functions and specific implementation principles of the modules in the embodiments of the present disclosure may refer to the embodiments of the methods, and are not described herein again.
The search processing device of the embodiment of the present disclosure may acquire the search statement first, then determine the type of the search statement, display a preset control on the search interface under the condition that the search statement is of the preset type, then acquire the target image to be identified, and then acquire and return the search result based on the search statement and the target image. Therefore, the search function of the system can be improved, the search result can be determined more accurately, the search requirements of the user can be met as far as possible, the efficiency and accuracy of search processing are improved, and a better use experience can be provided for the user.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the apparatus 500 comprises a computing unit 501 which may perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 501 executes the respective methods and processes described above, such as the search processing method. For example, in some embodiments, the search processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the search processing method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the search processing method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system, and overcomes the defects of high management difficulty and weak service expansibility in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
According to the technical solution of the embodiments of the present disclosure, the search statement can be acquired first, then the type of the search statement is determined; under the condition that the search statement is of the preset type, a preset control can be displayed on the search interface, then the target image to be identified is acquired, and then the search result can be acquired and returned based on the search statement and the target image. Therefore, the search function of the system can be improved, the search result can be determined more accurately, the search requirements of the user can be met as far as possible, the efficiency and accuracy of search processing are improved, and a better use experience can be provided for the user.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (17)

1. A search processing method, comprising:
acquiring a search statement;
determining a type of the search statement;
under the condition that the search statement is of a preset type, displaying a preset control on a search interface;
acquiring a target image to be identified;
and acquiring and returning a search result based on the search statement and the target image.
2. The method of claim 1, wherein the obtaining and returning search results based on the search term and the target image comprises:
analyzing the search statement to determine a keyword contained in the search statement;
under the condition that the search sentence contains at least one keyword of a specified type, segmenting the target image to obtain an image segment where an object matched with the keyword of the specified type is located;
and acquiring and returning a search result based on the image segment and the search statement.
3. The method of claim 2, wherein after the determining the keywords contained in the search sentence, further comprising:
under the condition that the search statement does not contain any keyword of a specified type, identifying the target image to determine each object contained in the target image, an image segment where each object is located and position information of each object in the image;
acquiring candidate search results based on the image segment where each object is located and the search statement;
and fusing the candidate search results according to the size of the image segment where each object is located and/or the position information of each object in the image to generate a search result to be returned.
4. The method of claim 1, wherein the obtaining and returning search results based on the search term and the target image comprises:
determining keywords contained in the search sentence and corresponding search intents;
determining the target type of the object to be acquired according to the search intention under the condition that the search statement does not contain any keyword of a specified type;
segmenting the target image to obtain an image segment where an object matched with the target type is located;
and acquiring and returning a search result based on the image segment and the search statement.
5. The method of any one of claims 1-4, wherein said obtaining and returning search results based on the search term and the target image comprises:
performing intention recognition on the search statement to determine a search intention;
according to the search intention, performing feature extraction on the target image to determine feature information of the target image;
and obtaining and returning a search result based on the feature information and the search intention.
6. The method of any of claims 1-4, wherein the determining the type of the search statement comprises:
and matching the search sentences with the preset template sentences, and determining the types of the search sentences according to the matching degree of the search sentences and the preset template sentences.
7. The method of any of claims 1-4, wherein the determining the type of the search statement comprises:
and determining the type of the search statement according to the keywords and/or the corresponding search intention contained in the search statement.
8. A search processing apparatus comprising:
the first acquisition module is used for acquiring a search statement;
a determining module for determining a type of the search statement;
the display module is used for displaying a preset control on a search interface under the condition that the search statement is of a preset type;
the second acquisition module is used for acquiring a target image to be identified;
and the third acquisition module is used for acquiring and returning a search result based on the search statement and the target image.
9. The apparatus of claim 8, wherein the third obtaining module is specifically configured to:
analyzing the search statement to determine a keyword contained in the search statement;
under the condition that the search sentence contains at least one keyword of a specified type, segmenting the target image to obtain an image segment where an object matched with the keyword of the specified type is located;
and acquiring and returning a search result based on the image segment and the search statement.
10. The apparatus of claim 9, wherein the third obtaining module is further specifically configured to:
under the condition that the search statement does not contain any keyword of a specified type, identifying the target image to determine each object contained in the target image, an image segment where each object is located and position information of each object in the image;
acquiring candidate search results based on the image segment where each object is located and the search statement;
and fusing the candidate search results according to the size of the image segment where each object is located and/or the position information of each object in the image to generate a search result to be returned.
11. The apparatus of claim 8, wherein the third obtaining module is further specifically configured to:
determining the keywords contained in the search statement and the corresponding search intent;
when the search statement does not contain any keyword of a specified type, determining, according to the search intent, a target type of the object to be acquired;
segmenting the target image to obtain an image segment in which an object matching the target type is located; and
acquiring and returning a search result based on the image segment and the search statement.
12. The apparatus according to any one of claims 8 to 11, wherein the third obtaining module is further specifically configured to:
performing intent recognition on the search statement to determine a search intent;
performing, according to the search intent, feature extraction on the target image to determine feature information of the target image; and
acquiring and returning a search result based on the feature information and the search intent.
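The intent-conditioned feature extraction of claim 12 might be sketched as follows, with trivial rule-based intent recognition and dictionary-dispatched extractors standing in for real trained models; all intent labels and function names are assumptions.

```python
def recognize_intent(statement: str) -> str:
    """Toy intent recognizer: keyword rules in place of a real classifier."""
    if "price" in statement or "buy" in statement:
        return "shopping"
    if "translate" in statement:
        return "translation"
    return "general"

# Hypothetical per-intent feature extractors. A real system would run a
# different model per intent; these just tag the image with a feature kind.
EXTRACTORS = {
    "shopping":    lambda img: {"kind": "product", "rows": len(img)},
    "translation": lambda img: {"kind": "text", "rows": len(img)},
    "general":     lambda img: {"kind": "generic", "rows": len(img)},
}

def search_by_intent(statement, image, backend):
    """Recognize intent, extract intent-specific features, then search."""
    intent = recognize_intent(statement)
    features = EXTRACTORS[intent](image)
    return backend(features, intent)
```

The point of the dispatch table is that a "what does this cost" query triggers product-oriented features while a translation query triggers text-region features, so the same image yields different feature information per intent.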
13. The apparatus according to any of claims 8-11, wherein the determining module is specifically configured to:
matching the search statement against preset template statements, and determining the type of the search statement according to the degree of matching between the search statement and the preset template statements.
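A minimal sketch of the template matching in claim 13, using token-level Jaccard similarity as a stand-in for whatever matching-degree measure an implementation would use; the templates, type labels, and threshold are hypothetical.

```python
# Hypothetical preset template statements, grouped by statement type.
TEMPLATES = {
    "image_needed": ["what is this", "how much is this", "identify this"],
    "plain_text":   ["weather today", "news about"],
}

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity, used here as the matching degree."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def statement_type(statement: str, threshold: float = 0.5) -> str:
    """Return the type of the best-matching template, or 'unknown'."""
    best_type, best_score = "unknown", 0.0
    for stype, templates in TEMPLATES.items():
        for template in templates:
            score = jaccard(statement, template)
            if score > best_score:
                best_type, best_score = stype, score
    return best_type if best_score >= threshold else "unknown"
```

A statement close enough to an "image_needed" template would then trigger the preset control on the search interface described in claim 8.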
14. The apparatus according to any of claims 8-11, wherein the determining module is specifically configured to:
determining the type of the search statement according to the keywords contained in the search statement and/or the corresponding search intent.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
CN202110280922.2A 2021-03-16 2021-03-16 Search processing method and device, electronic equipment and storage medium Pending CN113111249A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110280922.2A CN113111249A (en) 2021-03-16 2021-03-16 Search processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113111249A 2021-07-13

Family

ID=76711395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110280922.2A Pending CN113111249A (en) 2021-03-16 2021-03-16 Search processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113111249A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103412938A (en) * 2013-08-22 2013-11-27 成都数之联科技有限公司 Commodity price comparing method based on picture interactive type multiple-target extraction
CN104731776A (en) * 2015-03-27 2015-06-24 百度在线网络技术(北京)有限公司 Providing method and system of translating information
WO2017024884A1 (en) * 2015-08-07 2017-02-16 广州神马移动信息科技有限公司 Search intention identification method and device
CN107766582A (en) * 2017-11-27 2018-03-06 深圳市唯特视科技有限公司 A kind of image search method based on target regional area
CN108090126A (en) * 2017-11-14 2018-05-29 维沃移动通信有限公司 Image processing method, device and mobile terminal, image-recognizing method and server
US20190005070A1 (en) * 2017-06-30 2019-01-03 Baidu Online Network Technology (Beijing) Co., Ltd. Emoji searching method and apparatus
US20190311070A1 (en) * 2018-04-06 2019-10-10 Microsoft Technology Licensing, Llc Method and apparatus for generating visual search queries augmented by speech intent
CN110955818A (en) * 2019-12-04 2020-04-03 深圳追一科技有限公司 Searching method, searching device, terminal equipment and storage medium
CN111046203A (en) * 2019-12-10 2020-04-21 Oppo广东移动通信有限公司 Image retrieval method, image retrieval device, storage medium and electronic equipment
CN111177467A (en) * 2019-12-31 2020-05-19 京东数字科技控股有限公司 Object recommendation method and device, computer-readable storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN109726293B (en) Causal event map construction method, system, device and storage medium
US20220004714A1 (en) Event extraction method and apparatus, and storage medium
CN109815333B (en) Information acquisition method and device, computer equipment and storage medium
CN109492222B (en) Intention identification method and device based on concept tree and computer equipment
CN112579727B (en) Document content extraction method and device, electronic equipment and storage medium
CN106446018B (en) Query information processing method and device based on artificial intelligence
CN109086265B (en) Semantic training method and multi-semantic word disambiguation method in short text
CN112559684A (en) Keyword extraction and information retrieval method
CN114595686B (en) Knowledge extraction method, and training method and device of knowledge extraction model
CN113220836A (en) Training method and device of sequence labeling model, electronic equipment and storage medium
CN112926308B (en) Method, device, equipment, storage medium and program product for matching text
CN112765974B (en) Service assistance method, electronic equipment and readable storage medium
CN112925883B (en) Search request processing method and device, electronic equipment and readable storage medium
CN113032673A (en) Resource acquisition method and device, computer equipment and storage medium
CN112560461A (en) News clue generation method and device, electronic equipment and storage medium
CN110795942B (en) Keyword determination method and device based on semantic recognition and storage medium
US20220198358A1 (en) Method for generating user interest profile, electronic device and storage medium
CN112699237B (en) Label determination method, device and storage medium
CN113609847A (en) Information extraction method and device, electronic equipment and storage medium
CN112560425A (en) Template generation method and device, electronic equipment and storage medium
CN115719066A (en) Search text understanding method, device, equipment and medium based on artificial intelligence
CN113111249A (en) Search processing method and device, electronic equipment and storage medium
CN114492446A (en) Legal document processing method and device, electronic equipment and storage medium
CN113111248A (en) Search processing method and device, electronic equipment and storage medium
CN113434631A (en) Emotion analysis method and device based on event, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination