CN113343005A - Searching method, searching device, electronic equipment and readable storage medium - Google Patents

Searching method, searching device, electronic equipment and readable storage medium

Info

Publication number
CN113343005A
Authority
CN
China
Prior art keywords
entity
image
processed
user
triggerable control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110536185.8A
Other languages
Chinese (zh)
Inventor
刁佳佳 (Diao Jiajia)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110536185.8A
Publication of CN113343005A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • G06F16/532Query formulation, e.g. graphical querying

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a search method and apparatus, an electronic device, and a readable storage medium, and relates to the field of image processing technology. The search method includes: acquiring an image to be processed; performing entity recognition on the image to be processed, and displaying, in the image to be processed, a triggerable control corresponding to each recognized entity; and, upon detecting that a user clicks a displayed triggerable control, framing the entity corresponding to the clicked control and providing the user with search results for the framed entity. The method can satisfy a user's varying search needs during image search, simplify the steps of image search, and improve the accuracy and efficiency of image search.

Description

Searching method, searching device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of data processing technology, and in particular to the field of image processing technology. Provided are a search method, a search apparatus, an electronic device, and a readable storage medium.
Background
The mobile Internet has become the main way for netizens to obtain information, and mobile search has accordingly replaced PC search as the primary way for users to search. With the spread of artificial intelligence technology, mobile search has evolved from the initial text-only search into three modes: image search, voice search, and text search. Image search aims to let a user accurately obtain search results for a particular entity in an image. In the prior art, however, the user must manually frame the entity in the image before a search can be performed, which makes the search steps comparatively cumbersome and lowers both search efficiency and search accuracy.
Disclosure of Invention
According to a first aspect of the present disclosure, there is provided a search method, including: acquiring an image to be processed; performing entity recognition on the image to be processed, and displaying, in the image to be processed, a triggerable control corresponding to each recognized entity; and, upon detecting that a user clicks a displayed triggerable control, framing the entity corresponding to the clicked control and providing the user with search results for the framed entity.
According to a second aspect of the present disclosure, there is provided a search apparatus, including: an acquisition unit configured to acquire an image to be processed; a processing unit configured to perform entity recognition on the image to be processed and display, in the image to be processed, a triggerable control corresponding to each recognized entity; and a search unit configured to, upon detecting that a user clicks a displayed triggerable control, frame the entity corresponding to the clicked control and provide the user with search results for the framed entity.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method as described above.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method as described above.
According to the technical solution of the present disclosure, after entity recognition is performed on the image to be processed, the entities in the image are first presented in the form of triggerable controls; then, according to the user's click on a displayed control, the entity corresponding to the clicked control is framed in the image, and search results for the framed entity are provided to the user. This satisfies a user's varying search needs during image search, simplifies the steps of image search, and improves the accuracy and efficiency of image search.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become readily apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
fig. 3 is a block diagram of an electronic device for implementing a search method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure. As shown in fig. 1, the search method of this embodiment may specifically include the following steps:
s101, acquiring an image to be processed;
s102, entity identification is carried out on the image to be processed, and a triggerable control corresponding to the entity obtained through identification is displayed in the image to be processed;
s103, under the condition that the user clicks the displayed triggerable control, the entity corresponding to the clicked triggerable control is selected in a frame mode, and the search result of the selected entity in the frame mode is provided for the user.
In the search method of this embodiment, after entity recognition is performed on the image to be processed, the entities in the image are first presented as triggerable controls; the entity corresponding to a clicked control is then framed in the image according to the user's click, and search results for the framed entity are provided to the user. In this way, a user's varying search needs during image search can be satisfied, the steps of image search are simplified, and the accuracy and efficiency of image search are improved.
When acquiring the image to be processed in S101, this embodiment may use an image input by the user, or an image selected by the user while browsing a page. The image to be processed acquired in S101 may contain one or more entities, and the entities may be of types such as human faces, objects, and buildings.
After the image to be processed is acquired in S101, this embodiment performs entity recognition on it in S102 and displays, in the image, a triggerable control corresponding to each recognized entity.
The triggerable control displayed in the image to be processed in S102 may be a clickable button, and the number of displayed controls equals the number of entities contained in the image.
Specifically, when performing entity recognition and displaying the corresponding triggerable controls in S102, an optional implementation is: perform entity recognition on the image to be processed to obtain the coordinates of each entity, for example taking the positions of the entity's top-left and bottom-right points in the image as its coordinates; determine the entity's center point from these coordinates; and display the triggerable control corresponding to the entity at the determined center point, as in the sketch below.
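As an illustration of this implementation, the following minimal Python sketch computes an entity's center point from its bounding-box coordinates and places a control there. The entity detector and the UI call are hypothetical stand-ins; the disclosure does not prescribe a concrete API.

    def place_triggerable_controls(image, detect_entities, show_control):
        """For each recognized entity, display a triggerable control
        at the center point of its bounding box."""
        for entity in detect_entities(image):  # hypothetical detector
            # Coordinates of the entity: top-left (x1, y1) and
            # bottom-right (x2, y2) points in the image to be processed.
            x1, y1, x2, y2 = entity["box"]
            # Center point of the entity, derived from its coordinates.
            cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
            # Display a clickable button at the center point; the
            # recognition result (label) may be shown beside it.
            show_control(position=(cx, cy), label=entity.get("label"))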
That is to say, in this embodiment the triggerable control generated for an entity is displayed at the entity's center point in the image to be processed, so that the user can identify the entities in the image more intuitively, which improves the user's efficiency in image search.
In addition, when displaying the triggerable control corresponding to a recognized entity in S102, this embodiment may further: acquire the recognition result of the entity; and display the recognition result together with the entity's triggerable control in the image to be processed, for example on one side of the control.
After the triggerable controls corresponding to the recognized entities are displayed in the image in S102, this embodiment proceeds to S103: upon detecting that the user clicks a displayed triggerable control, the entity corresponding to the clicked control is framed in the image to be processed, and search results for the framed entity are provided to the user.
That is to say, this embodiment frames different entities in the image according to the user's clicks on the triggerable controls; if the image contains multiple triggerable controls, the framed entity changes as the user clicks different controls, so that search results for different entities are provided to the user.
When detecting in S103 that the user has clicked a triggerable control in the image, a preset animation effect may be loaded to frame the entity, thereby updating the view of the image to be processed. For example, after the user clicks one control and its entity is framed, clicking another control changes the framed entity via the preset animation effect.
An optional implementation for framing the entity corresponding to the clicked triggerable control in S103 is: set a mask layer over the image to be processed; and frame, on the mask layer, the entity corresponding to the control clicked by the user.
When framing the entity on the mask layer in S103, the portion of the mask layer over the framed entity may be removed, or a highlight layer may be added over the mask at the framed entity, so that the framed entity is highlighted, as in the sketch below.
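A minimal sketch of this mask-layer behavior, using the Pillow imaging library; the dimming opacity and the box format are illustrative assumptions, since the disclosure only requires that the framed entity be visually distinguished:

    from PIL import Image, ImageDraw

    def frame_entity_with_mask(image, box, dim_alpha=128):
        """Dim the image with a semi-transparent mask layer, then remove
        the mask over the framed entity so that it stands out."""
        base = image.convert("RGBA")
        # Mask layer covering the whole image to be processed.
        mask = Image.new("RGBA", base.size, (0, 0, 0, dim_alpha))
        draw = ImageDraw.Draw(mask)
        # Remove the mask at the framed entity (fully transparent there).
        draw.rectangle(box, fill=(0, 0, 0, 0))
        # Draw the selection frame itself around the entity.
        draw.rectangle(box, outline=(255, 255, 255, 255), width=3)
        return Image.alpha_composite(base, mask)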
In actual use, framing an entity may go wrong: the wrong entity may be framed, or the frame may be of an inappropriate size, which lowers the accuracy of the resulting search results. To improve this accuracy, after framing the entity corresponding to the clicked control in S103, this embodiment may further adjust the frame of the framed entity according to the user's operation: for example, the frame may be moved along the direction of the user's gesture on the screen, or its position and size may be changed as the user drags its four corners, as in the sketch below.
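A sketch of such frame adjustment follows; the drag-event format and the corner hit-test radius are assumptions made for illustration:

    def adjust_frame(box, drag_start, drag_end, corner_radius=20):
        """Resize the frame when the drag starts near one of its four
        corners; otherwise move it when the drag starts inside it."""
        x1, y1, x2, y2 = box
        sx, sy = drag_start
        dx, dy = drag_end[0] - sx, drag_end[1] - sy

        def near(px, py):  # did the drag start near this corner?
            return abs(sx - px) <= corner_radius and abs(sy - py) <= corner_radius

        if near(x1, y1):  # top-left corner dragged: resize
            return (x1 + dx, y1 + dy, x2, y2)
        if near(x2, y2):  # bottom-right corner dragged: resize
            return (x1, y1, x2 + dx, y2 + dy)
        if near(x2, y1):  # top-right corner dragged: resize
            return (x1, y1 + dy, x2 + dx, y2)
        if near(x1, y2):  # bottom-left corner dragged: resize
            return (x1 + dx, y1, x2, y2 + dy)
        if x1 <= sx <= x2 and y1 <= sy <= y2:  # started inside: move
            return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)
        return box  # drag started elsewhere: leave the frame unchanged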
That is to say, this embodiment can also carry out entity framing in combination with the user's operations, further improving the accuracy of the framing and, correspondingly, the accuracy of the search results provided to the user.
When providing search results for the framed entity in S103, this embodiment may search the framed entity by image search to obtain the results, or may search it by image search combined with the framed entity's recognition result, as in the sketch below.
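The two options might be sketched as follows; the image_search backend and its parameters are assumptions, since the disclosure leaves the retrieval method open:

    def search_framed_entity(image, box, label, image_search):
        """Search the framed entity, optionally combining the image
        query with the entity's recognition result."""
        crop = image.crop(box)  # the framed entity as a query image
        if label is None:
            # Pure image search on the framed entity.
            return image_search(query_image=crop)
        # Image search combined with the recognition result, used as
        # an additional text condition to refine the results.
        return image_search(query_image=crop, text=label)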
In addition, when detecting in S103 that the user clicks displayed triggerable controls and framing the corresponding entities, an optional implementation is: receive the user's clicks on multiple triggerable controls; and frame each of the entities corresponding to the clicked controls.
That is to say, in a single search this embodiment can frame not only one entity from one click of the user, but also multiple entities from multiple clicks, further improving the user's flexibility in image search.
After multiple entities have been framed according to the user's multiple clicks, an optional implementation for providing the search results in S103 is: search according to the multiple framed entities, i.e. take the framed entities together as the search condition; and provide the user with search results corresponding to the framed entities. In other words, this embodiment can use multiple entities in one search, further improving the accuracy of the obtained results, as in the sketch below.
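A sketch of this multi-entity search follows; the search backend and how it combines several query crops into one search condition are again assumptions:

    def search_framed_entities(image, boxes, image_search):
        """Crop every framed entity and submit all of them together as
        the search condition of a single query."""
        crops = [image.crop(box) for box in boxes]  # one crop per entity
        # All framed entities jointly form the search condition, so the
        # results relate to every framed entity at once.
        return image_search(query_images=crops)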
For example, if "Zhang San" and "Li Si" are framed in the image to be processed in S103, this embodiment may search using both the entity corresponding to "Zhang San" and the entity corresponding to "Li Si", so as to provide the user with search results related to both "Zhang San" and "Li Si".
According to the method of this embodiment, after entity recognition is performed on the image to be processed, the entities in the image are first presented as triggerable controls; the entity corresponding to a clicked control is then framed in the image according to the user's click, and search results for the framed entity are provided to the user. This satisfies a user's varying search needs during image search, simplifies the steps of image search, and improves the accuracy of image search.
Fig. 2 is a schematic diagram according to a second embodiment of the present disclosure. As shown in fig. 2, the search apparatus 200 of the present embodiment includes:
an acquisition unit 201, configured to acquire an image to be processed;
the processing unit 202 is configured to perform entity recognition on the image to be processed, and display, in the image to be processed, a triggerable control corresponding to each recognized entity;
the search unit 203 is configured to, upon detecting that the user clicks a displayed triggerable control, frame the entity corresponding to the clicked control and provide the user with search results for the framed entity.
When acquiring the image to be processed, the acquisition unit 201 may use an image input by the user, or an image selected by the user while browsing a page. The image to be processed acquired by the acquisition unit 201 may contain one or more entities, and the entities may be of types such as human faces, objects, and buildings.
In this embodiment, after the image to be processed is acquired by the acquisition unit 201, the processing unit 202 performs entity recognition on it and displays, in the image, a triggerable control corresponding to each recognized entity.
The triggerable controls displayed by the processing unit 202 in the image to be processed may be clickable buttons, and the number of displayed controls equals the number of entities contained in the image.
Specifically, when performing entity recognition on the image to be processed and displaying the corresponding triggerable controls, an optional implementation for the processing unit 202 is: perform entity recognition on the image to be processed to obtain the coordinates of each entity; determine the entity's center point from these coordinates; and display the triggerable control corresponding to the entity at the determined center point.
In addition, when displaying the triggerable control corresponding to a recognized entity, the processing unit 202 may further: acquire the recognition result of the entity; and display the recognition result together with the entity's triggerable control in the image to be processed.
In this embodiment, after the processing unit 202 displays the triggerable controls corresponding to the recognized entities in the image, the search unit 203, upon detecting that the user clicks a displayed control, frames the entity corresponding to the clicked control and provides the user with search results for the framed entity.
That is to say, the search unit 203 frames different entities in the image according to the user's clicks on the triggerable controls; if the image contains multiple triggerable controls, the framed entity changes as the user clicks different controls, so that search results for different entities are provided to the user.
When detecting that the user has clicked a triggerable control in the image, the search unit 203 may load a preset animation effect to frame the entity, thereby updating the view of the image to be processed.
When framing the entity corresponding to the triggerable control clicked by the user, an optional implementation for the search unit 203 is: set a mask layer over the image to be processed; and frame, on the mask layer, the entity corresponding to the clicked control.
When framing the entity on the mask layer, the search unit 203 may remove the portion of the mask layer over the framed entity, or add a highlight layer over the mask at the framed entity, so that the framed entity is highlighted.
In actual use, framing an entity may go wrong: the wrong entity may be framed, or the frame may be of an inappropriate size, which lowers the accuracy of the resulting search results. To improve this accuracy, after framing the entity corresponding to the clicked control, the search unit 203 may further adjust the frame of the framed entity according to the user's operation.
That is to say, the search unit 203 can also carry out entity framing in combination with the user's operations, further improving the accuracy of the framing and, correspondingly, the accuracy of the search results provided to the user.
In addition, when providing search results for the framed entity to the user, the search unit 203 may search the framed entity by image search to obtain the results, or may search it by image search combined with the framed entity's recognition result.
When detecting that the user clicks displayed triggerable controls and framing the corresponding entities, an optional implementation for the search unit 203 is: receive the user's clicks on multiple triggerable controls; and frame each of the entities corresponding to the clicked controls.
That is to say, in a single search the search unit 203 can frame one entity from one click of the user, or multiple entities from multiple clicks, further improving the user's flexibility in image search.
After multiple entities have been framed according to the user's multiple clicks, an optional implementation for the search unit 203 when providing the search results is: search according to the multiple framed entities; and provide the user with search results corresponding to the framed entities. In other words, the search unit 203 can use multiple entities in one search, further improving the accuracy of the obtained results.
According to embodiments of the present disclosure, there are also provided an electronic device, a readable storage medium, and a computer program product.
Fig. 3 is a block diagram of an electronic device for the search method according to an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not meant to limit the implementations of the disclosure described and/or claimed herein.
As shown in fig. 3, the device 300 includes a computing unit 301, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 302 or loaded from a storage unit 308 into a random access memory (RAM) 303. The RAM 303 may also store the various programs and data required for the operation of the device 300. The computing unit 301, the ROM 302, and the RAM 303 are connected to one another via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Various components in the device 300 are connected to the I/O interface 305, including: an input unit 306 such as a keyboard or mouse; an output unit 307 such as various types of displays and speakers; a storage unit 308 such as a magnetic disk or optical disk; and a communication unit 309 such as a network card, modem, or wireless communication transceiver. The communication unit 309 allows the device 300 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
The computing unit 301 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Examples of the computing unit 301 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 301 executes the methods and processes described above, such as the search method. For example, in some embodiments, the search method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 308.
In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 300 via the ROM 302 and/or the communication unit 309. When the computer program is loaded into the RAM 303 and executed by the computing unit 301, one or more steps of the search method described above may be performed. Alternatively, in other embodiments, the computing unit 301 may be configured to perform the search method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server (also called a cloud computing server or cloud host), a host product in a cloud computing service system that remedies the difficult management and weak service scalability of traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (17)

1. A search method, comprising:
acquiring an image to be processed;
performing entity recognition on the image to be processed, and displaying, in the image to be processed, a triggerable control corresponding to each recognized entity;
and, upon detecting that a user clicks a displayed triggerable control, framing the entity corresponding to the clicked triggerable control, and providing the user with search results for the framed entity.
2. The method of claim 1, wherein the performing entity recognition on the image to be processed and displaying, in the image to be processed, a triggerable control corresponding to each recognized entity comprises:
performing entity recognition on the image to be processed to obtain the coordinates of an entity in the image to be processed;
determining the central point of the entity according to the obtained coordinates of the entity;
and displaying a triggerable control corresponding to the entity at the determined center point.
3. The method of claim 1, wherein the displaying, in the image to be processed, a triggerable control corresponding to each recognized entity comprises:
acquiring an identification result of an entity;
and displaying the recognition result of the entity and the triggerable control corresponding to the entity in the image to be processed.
4. The method of claim 1, wherein the framing the entity corresponding to the clicked triggerable control comprises:
setting a mask layer on the image to be processed;
and framing, on the mask layer, the entity corresponding to the clicked triggerable control.
5. The method of claim 1, further comprising:
after framing the entity corresponding to the clicked triggerable control, adjusting the frame of the framed entity according to the user's operation.
6. The method of claim 1, wherein, upon detecting that the user clicks displayed triggerable controls, the framing the entity corresponding to the clicked triggerable control comprises:
receiving clicks of a plurality of triggerable controls by a user;
and respectively framing the plurality of entities corresponding to the clicked triggerable controls.
7. The method of claim 6, wherein the providing search results for the framed entity to the user comprises:
searching according to the framed plurality of entities;
providing search results corresponding to the framed plurality of entities to the user.
8. A search apparatus, comprising:
an acquisition unit configured to acquire an image to be processed;
a processing unit configured to perform entity recognition on the image to be processed and display, in the image to be processed, a triggerable control corresponding to each recognized entity;
and a search unit configured to, upon detecting that a user clicks a displayed triggerable control, frame the entity corresponding to the clicked triggerable control and provide the user with search results for the framed entity.
9. The apparatus according to claim 8, wherein the processing unit, when performing entity recognition on the image to be processed and displaying, in the image to be processed, a triggerable control corresponding to each recognized entity, specifically performs:
performing entity recognition on the image to be processed to obtain the coordinates of an entity in the image to be processed;
determining the central point of the entity according to the obtained coordinates of the entity;
and displaying a triggerable control corresponding to the entity at the determined center point.
10. The apparatus according to claim 8, wherein the processing unit, when displaying, in the image to be processed, a triggerable control corresponding to each recognized entity, specifically performs:
acquiring an identification result of an entity;
and displaying the recognition result of the entity and the triggerable control corresponding to the entity in the image to be processed.
11. The apparatus according to claim 8, wherein the search unit, when framing the entity corresponding to the clicked triggerable control, specifically performs:
setting a mask layer on the image to be processed;
and framing, on the mask layer, the entity corresponding to the clicked triggerable control.
12. The apparatus of claim 8, wherein the search unit is further configured to, after framing the entity corresponding to the clicked triggerable control,
adjust the frame of the framed entity according to the user's operation.
13. The apparatus according to claim 8, wherein the search unit, when detecting that the user clicks the displayed triggerable control, and when framing an entity corresponding to the clicked triggerable control, specifically performs:
receiving clicks of a plurality of triggerable controls by a user;
and respectively framing the plurality of entities corresponding to the clicked triggerable controls.
14. The apparatus according to claim 13, wherein the search unit, when providing the search result of the framed entity to the user, specifically performs:
searching according to the framed plurality of entities;
providing search results corresponding to the framed plurality of entities to the user.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
CN202110536185.8A 2021-05-17 2021-05-17 Searching method, searching device, electronic equipment and readable storage medium Pending CN113343005A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110536185.8A CN113343005A (en) 2021-05-17 2021-05-17 Searching method, searching device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110536185.8A CN113343005A (en) 2021-05-17 2021-05-17 Searching method, searching device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN113343005A 2021-09-03

Family

ID=77470579

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110536185.8A Pending CN113343005A (en) 2021-05-17 2021-05-17 Searching method, searching device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113343005A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140282660A1 (en) * 2013-03-14 2014-09-18 Ant Oztaskent Methods, systems, and media for presenting mobile content corresponding to media content
CN110704684A (en) * 2019-10-17 2020-01-17 北京字节跳动网络技术有限公司 Video searching method and device, terminal and storage medium
CN110909192A (en) * 2019-11-20 2020-03-24 腾讯科技(深圳)有限公司 Instant searching method, device, terminal and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140282660A1 (en) * 2013-03-14 2014-09-18 Ant Oztaskent Methods, systems, and media for presenting mobile content corresponding to media content
CN110704684A (en) * 2019-10-17 2020-01-17 北京字节跳动网络技术有限公司 Video searching method and device, terminal and storage medium
CN110909192A (en) * 2019-11-20 2020-03-24 腾讯科技(深圳)有限公司 Instant searching method, device, terminal and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王汝言; 刘宇哲; 张普宁; 亢旭源; 李学芳: "面向物联网的边云协同实体搜索方法" ["An Edge–Cloud Collaborative Entity Search Method for the Internet of Things"], 计算机工程 (Computer Engineering), no. 08 *

Similar Documents

Publication Publication Date Title
US9285969B2 (en) User interface navigation utilizing pressure-sensitive touch
CN113342345A (en) Operator fusion method and device of deep learning framework
CN112597754A (en) Text error correction method and device, electronic equipment and readable storage medium
CN114120414B (en) Image processing method, image processing apparatus, electronic device, and medium
CN113780098A (en) Character recognition method, character recognition device, electronic equipment and storage medium
CN112784588B (en) Method, device, equipment and storage medium for labeling text
CN113837194A (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN112528995A (en) Method for training target detection model, target detection method and device
CN115454971A (en) Data migration method and device, electronic equipment and storage medium
CN113343005A (en) Searching method, searching device, electronic equipment and readable storage medium
CN113656533A (en) Tree control processing method and device and electronic equipment
CN113126866A (en) Object determination method and device, electronic equipment and storage medium
CN112861504A (en) Text interaction method, device, equipment, storage medium and program product
CN116071422B (en) Method and device for adjusting brightness of virtual equipment facing meta-universe scene
CN114296609B (en) Interface processing method and device, electronic equipment and storage medium
CN114879889B (en) Processing method, processing device, revocation system, electronic equipment and storage medium
CN112765975B (en) Word segmentation disambiguation processing method, device, equipment and medium
CN113360074B (en) Soft keyboard display method, related device and computer program product
CN114564133A (en) Application program display method, device, equipment and medium
CN113961775A (en) Data visualization method and device, electronic equipment and readable storage medium
CN114416040A (en) Page construction method, device, equipment and storage medium
CN114036392A (en) Page processing method, training method, device, electronic equipment and storage medium
CN113407745A (en) Data annotation method and device, electronic equipment and computer readable storage medium
CN114401337A (en) Data sharing method, device and equipment based on cloud mobile phone and storage medium
CN114494950A (en) Video processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination