CN117290537A - Image searching method, device, equipment and storage medium

Info

Publication number: CN117290537A (application CN202311281761.4A); granted as CN117290537B
Authority: CN (China)
Prior art keywords: contour, current, confidence coefficient, confidence, loaded
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventor: 李嘉昕
Current and original assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Events: priority to CN202311281761.4A; publication of CN117290537A; application granted; publication of CN117290537B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image searching method, apparatus, device and storage medium based on artificial intelligence technology. The method comprises: after a contour group to be loaded is determined in a target image, acquiring that contour group and loading it into a contour searching unit, where the currently loaded contour group comprises a first contour block and second contour blocks corresponding to N motion directions; invoking the contour searching unit to search for contour points in the currently loaded contour group; each time a contour point is searched, determining the current motion direction of the contour and predicting the confidence of the current motion direction at the current moment; if the current motion direction falls within the prediction area where the N second contour blocks are located and the corresponding confidence is greater than or equal to a threshold, determining the next contour group to be loaded in the target image; and iteratively executing this process until an end-of-search event is detected, then determining the contour of the target image according to the contour points searched by the contour searching unit. The method reduces the storage space occupied by the contour searching unit and saves storage resources.

Description

Image searching method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image searching method, apparatus, device, and storage medium.
Background
With the development of computer technology, image search techniques have been proposed; such a technique searches an image for the outline (contour) of an object within it. In existing image searching methods, the entire image to be searched is generally loaded into a contour searching unit, which is then invoked to search for contour points in the loaded image so as to obtain the contour of the object in the image. Because the whole image is loaded into the contour searching unit, the unit consumes a large amount of storage space, and the problem of excessive storage occupation is particularly pronounced for high-resolution images.
Disclosure of Invention
The embodiment of the application provides an image searching method, device, equipment and storage medium, which can reduce the occupation of the storage space of a contour searching unit and save storage resources.
In one aspect, an embodiment of the present application provides an image searching method, including:
after a contour group to be loaded is determined in a target image, acquiring the corresponding contour group and loading it into a contour searching unit so as to refresh the storage space of the contour searching unit; wherein the currently loaded contour group comprises a first contour block and N second contour blocks, N being a positive integer; the N second contour blocks are the contour blocks located in the N adjacent regions of the first contour block, one adjacent region corresponds to one motion direction, the region where the N second contour blocks are located is a prediction area, and a contour block is an image block divided from the target image;
invoking the contour searching unit to search for contour points, starting from the first contour block in the currently loaded contour group, based on the N motion directions;
each time a contour point is searched, determining the current motion direction of the contour based on the currently searched contour point, and predicting the confidence of the current motion direction at the current moment; any confidence indicates the probability of continuing to search for contour points along the corresponding motion direction;
if the current motion direction falls within the prediction area and the confidence of the current motion direction at the current moment is greater than or equal to a threshold, determining the contour group to be loaded next in the target image;
and iteratively executing the above process until an end-of-search event for the target image is detected, then determining the contour of the target image according to the contour points searched by the contour searching unit.
In another aspect, an embodiment of the present application provides an image searching apparatus, including:
an acquisition unit, configured to, after the contour group to be loaded is determined in a target image, acquire the corresponding contour group and load it into a contour searching unit so as to refresh the storage space of the contour searching unit; wherein the currently loaded contour group comprises a first contour block and N second contour blocks, N being a positive integer; the N second contour blocks are the contour blocks located in the N adjacent regions of the first contour block, one adjacent region corresponds to one motion direction, the region where the N second contour blocks are located is a prediction area, and a contour block is an image block divided from the target image;
a processing unit, configured to invoke the contour searching unit to search for contour points, starting from the first contour block in the currently loaded contour group, based on the N motion directions;
the processing unit is further configured to, each time a contour point is searched, determine the current motion direction of the contour based on the currently searched contour point, and predict the confidence of the current motion direction at the current moment; any confidence indicates the probability of continuing to search for contour points along the corresponding motion direction;
the processing unit is further configured to determine the contour group to be loaded next in the target image if the current motion direction falls within the prediction area and the confidence of the current motion direction at the current moment is greater than or equal to a threshold;
and the above process is executed iteratively until the processing unit detects the end-of-search event for the target image and determines the contour of the target image according to the contour points searched by the contour searching unit.
In yet another aspect, embodiments of the present application provide a computer device including an input interface and an output interface, the computer device further including:
a processor and a computer storage medium;
wherein the processor is adapted to execute one or more instructions, and the computer storage medium stores one or more instructions adapted to be loaded and executed by the processor to perform the above-mentioned image searching method.
In yet another aspect, embodiments of the present application provide a computer storage medium storing one or more instructions adapted to be loaded and executed by a processor to perform the above-mentioned image searching method.
In yet another aspect, embodiments of the present application provide a computer program product comprising a computer program stored in a computer storage medium; the processor of the computer device reads the computer program from the computer storage medium, and the processor executes the computer program to cause the computer device to execute the above-described image search method.
In the embodiments of the present application, when searching for the contour of the target image, the target image is not loaded into the contour searching unit directly; instead, a contour group composed of several image blocks divided from the target image is loaded. While the contour searching unit is invoked to search for contour points in the currently loaded contour group, each time a contour point is searched it is judged in real time whether the contour searching unit needs to load a new contour group, so that the target image is continuously loaded into the contour searching unit contour group by contour group, refreshing its storage space, until the contour of the target image is obtained. Throughout this process the contour searching unit only ever occupies storage space the size of one contour group; compared with loading the whole image into the contour searching unit, this reduces the occupied storage space and saves the storage resources of the contour searching unit.
In addition, when judging whether the contour searching unit needs to load a new contour group after each contour point is searched, the probability of continuing to search for contour points along the current motion direction can be predicted based on the current motion direction of the contour. According to this predicted probability, whether a new contour group is needed can be judged accurately and in real time; once a new contour group is judged to be needed, the contour group to be loaded is determined and can be prefetched, hiding the storage latency required for loading it and improving overall search performance.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1a is a schematic diagram of a 4-neighborhood structure according to an embodiment of the present application;
FIG. 1b is a schematic diagram of an 8-neighborhood structure according to an embodiment of the present application;
FIG. 1c is a schematic diagram of a D-neighborhood structure according to an embodiment of the present application;
FIG. 1d is a schematic diagram of the image area corresponding to a currently loaded contour group according to an embodiment of the present application;
FIG. 1e is a schematic diagram of the interaction between a terminal and a server according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of an image searching method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of determining the current motion direction of a contour according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of another image searching method according to an embodiment of the present application;
FIG. 5a is a schematic diagram of determining a contour group to be loaded according to an embodiment of the present application;
FIG. 5b is another schematic diagram of determining a contour group to be loaded according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an image searching apparatus according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of another image searching apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Embodiments of the present application relate to the field of artificial intelligence (AI). Artificial intelligence uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, pre-trained model technology, operation/interaction systems, mechatronics, and the like. A pre-trained model, also called a large model or foundation model, can be fine-tuned and then widely applied to downstream tasks in all major directions of artificial intelligence. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Embodiments of the present application mainly relate to computer vision (CV) technology in the field of artificial intelligence. Computer vision is the science of studying how to make machines "see"; more specifically, it uses cameras and computers in place of human eyes to recognize and measure targets and to perform graphic processing, so that the processed images are more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Large-model technology has brought important changes to the development of computer vision: pre-trained vision models such as Swin-Transformer (a self-attention neural network), ViT (Vision Transformer), V-MoE (a sparse mixture model) and MAE (an autoencoder model) can be quickly and widely applied to specific downstream tasks through fine-tuning. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, optical character recognition (OCR), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous localization and mapping, as well as common biometric recognition technologies such as face recognition and fingerprint recognition.
The embodiments of the present application provide an image searching scheme based on computer vision technology which, while searching out the image contour (i.e. the contour of an object in an image), can reduce the storage space occupied by the contour searching unit and save its storage resources. The contour of an object in an image is a curve formed by the pixel points on the edge (or boundary) of the object in the image. The image searching scheme may generally include the following process: after a contour group to be loaded is determined in a target image, acquiring the corresponding contour group and loading it into a contour searching unit so as to refresh the storage space of the contour searching unit; invoking the contour searching unit to search for contour points in the currently loaded contour group, where the currently loaded contour group comprises several image blocks divided from the target image; each time a contour point is searched, judging whether the contour searching unit needs to load a new contour group, and if so, determining the contour group to be loaded next in the target image; and iteratively executing this process until an end-of-search event for the target image is detected, then determining the contour of the target image according to the contour points searched by the contour searching unit.
The target image may be any image to be searched, for example an image in a picture format or a video frame. The contour searching unit is configured to execute the processing logic of the image search, for example to search the contour group for contour points, i.e. the pixel points in the image that can be used to represent the contour. Refreshing the storage space of the contour searching unit means that the currently loaded contour group replaces the contour group loaded into that storage space last time, so that during the whole search only storage space the size of one contour group is ever occupied in the contour searching unit; compared with loading the whole image into the contour searching unit, this reduces the occupied storage space and saves its storage resources. In addition, when judging whether a new contour group needs to be loaded after each contour point is searched, the probability of continuing to search for contour points along the current motion direction can be predicted based on the current motion direction of the contour; according to this predicted probability, whether a new contour group is needed can be judged accurately and in real time, and once it is, the contour group to be loaded is determined and can be prefetched, hiding the storage latency of loading it and improving overall search performance.
The currently loaded contour group may comprise a first contour block and N second contour blocks, N being a positive integer; the N second contour blocks are the contour blocks located in the N adjacent regions of the first contour block, one adjacent region corresponds to one motion direction, the region where the N second contour blocks are located is the prediction area, and a contour block is an image block divided from the target image.
Taking pixel points as an example, neighborhoods can generally be divided into the 4-neighborhood, the 8-neighborhood, the D-neighborhood and so on, according to the adjacency relations between pixel points. Referring to FIG. 1a, a schematic diagram of the 4-neighborhood provided in an embodiment of the present application, the four pixels adjacent to pixel P, namely right (reference numeral 0), up (reference numeral 1), left (reference numeral 2) and down (reference numeral 3), form the 4-neighborhood of pixel P; one neighbor corresponds to one motion direction, so the 4-neighborhood corresponds to 4 motion directions, the 1st to 4th motion directions being right, up, left and down. Referring to FIG. 1b, a schematic diagram of the 8-neighborhood provided in an embodiment of the present application, the eight pixels adjacent to pixel P, namely right (0), upper right (1), up (2), upper left (3), left (4), lower left (5), down (6) and lower right (7), form the 8-neighborhood of pixel P; the 8-neighborhood corresponds to 8 motion directions, the 1st to 8th motion directions being right, upper right, up, upper left, left, lower left, down and lower right. Referring to FIG. 1c, a schematic diagram of the D-neighborhood provided in an embodiment of the present application, the four diagonally adjacent pixels of P, namely upper right (1), upper left (3), lower left (5) and lower right (7), form the D-neighborhood of pixel P; the D-neighborhood corresponds to 4 motion directions, namely upper right, upper left, lower left and lower right. It should be noted that the image searching method provided by the embodiments of the present application is broadly applicable and can be applied to contour searching over any neighborhood, including but not limited to the 4-neighborhood, 8-neighborhood and D-neighborhood; the complexity can be flexibly adapted to the practical application, and the specific neighborhood used is not limited in the embodiments of the present application. For ease of explanation, the embodiments below take the 8-neighborhood as an example.
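As a concrete illustration of the neighborhood numbering just described, the following Python sketch lists the (dx, dy) pixel offsets for the 4-neighborhood, 8-neighborhood and D-neighborhood. The variable names and the convention that x grows to the right and y grows downward are assumptions made for illustration only; they are not part of the patent text.

```python
# Hypothetical direction tables for the neighborhoods of FIGS. 1a-1c.
# Assumed convention: x grows to the right, y grows downward, so "up" is (0, -1).

# 8-neighborhood: index 0..7 = right, upper right, up, upper left, left, lower left, down, lower right
OFFSETS_8 = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

# 4-neighborhood: index 0..3 = right, up, left, down
OFFSETS_4 = [(1, 0), (0, -1), (-1, 0), (0, 1)]

# D-neighborhood (diagonal neighbors only): upper right, upper left, lower left, lower right
OFFSETS_D = [(1, -1), (-1, -1), (-1, 1), (1, 1)]

def neighbor(point, direction, offsets=OFFSETS_8):
    """Return the pixel reached from `point` by one step in `direction`."""
    x, y = point
    dx, dy = offsets[direction]
    return (x + dx, y + dy)

if __name__ == "__main__":
    print(neighbor((10, 10), 6))  # direction 6 = "down" -> (10, 11)
```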
Referring to FIG. 1d, a schematic diagram of the image area corresponding to a currently loaded contour group: the first contour block may be shown by reference numeral 100, and the coordinates of its center point by reference numeral 101; the 8 second contour blocks located in the 8 adjacent regions of the first contour block correspond respectively to the 8 motion directions right, upper right, up, upper left, left, lower left, down and lower right. The region where the 8 second contour blocks are located is the prediction area, and the region where the first contour block is located, shown by reference numeral 100, may correspondingly be called the calculation area. The size of a contour block may be configured according to specific requirements and is not limited in the embodiments of the present application; for example, if the requirement indicates that a contour block is composed of m×n TILEs (a TILE being defined as the minimum image block, with resolution K×K), the size of the contour block is (m×K)×(n×K), i.e. its length is m×K and its width is n×K, where m, n and K may be configured according to specific requirements.
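The contour-group geometry of FIG. 1d can be sketched as follows. The 3×3 block window, the helper names and the placeholder values of m, n and K are illustrative assumptions; the patent leaves these configurable.

```python
# Minimal sketch of the contour-group layout in FIG. 1d (8-neighborhood assumed).
M, N_TILES, K = 4, 4, 16                 # assumed: a contour block is m x n TILEs of K x K pixels
BLOCK_W, BLOCK_H = M * K, N_TILES * K    # contour block size is (m*K) x (n*K)

def contour_group_blocks(first_block):
    """Return the 9 block indices of a contour group: the first (calculation) block
    plus its 8 neighboring blocks, which together form the prediction area."""
    bx, by = first_block
    group = [(bx, by)]                               # calculation area (reference numeral 100)
    group += [(bx + dx, by + dy)                     # prediction area: the 8 neighboring blocks
              for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    return group

def block_of_pixel(x, y):
    """Map a pixel coordinate of the target image to the contour block containing it."""
    return (x // BLOCK_W, y // BLOCK_H)
```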
Based on the above description of the currently loaded contour group, the image searching scheme is described in more detail below: after a contour group to be loaded is determined in the target image, the corresponding contour group is acquired and loaded into the contour searching unit so as to refresh its storage space; the contour searching unit is invoked to search for contour points, starting from the first contour block in the currently loaded contour group, based on the N motion directions; each time a contour point is searched, the current motion direction of the contour is determined based on the currently searched contour point, and the confidence of the current motion direction at the current moment is predicted, where any confidence indicates the probability of continuing to search for contour points along the corresponding motion direction; if the current motion direction falls within the prediction area and the confidence of the current motion direction at the current moment is greater than or equal to a threshold, the contour group to be loaded next is determined in the target image; this process is executed iteratively until an end-of-search event for the target image is detected, and the contour of the target image is determined according to the contour points searched by the contour searching unit. In other words, if the current motion direction falls within the prediction area and the confidence of the current motion direction at the current moment is greater than or equal to the threshold, the contour searching unit is considered to need a new contour group.
In a specific implementation, the above image searching scheme may be executed by a computer device, which may be a terminal or a server; that is, the scheme may be executed by a terminal or by a server. Alternatively, the scheme may be executed jointly by a terminal and a server. Taking the case in which the contour searching unit is deployed in the server as an example: after determining the contour group to be loaded in the target image, the terminal may acquire the corresponding contour group and send it to the server. The server loads the received contour group into the contour searching unit so as to refresh its storage space; invokes the contour searching unit to search for contour points, starting from the first contour block in the currently loaded contour group, based on the N motion directions; each time a contour point is searched, determines the current motion direction of the contour based on the currently searched contour point and predicts the confidence of the current motion direction at the current moment; and if the current motion direction falls within the prediction area and that confidence is greater than or equal to a threshold, requests the terminal to send a new contour group. In response to the server's request, the terminal determines the contour group to be loaded next in the target image, acquires it, and sends it to the server. This process is executed iteratively until the server detects the end-of-search event for the target image, determines the contour of the target image according to the contour points searched by the contour searching unit, and returns the contour to the terminal; the process may be illustrated by FIG. 1e.
The terminal mentioned above may be a smart phone, a computer (such as a tablet, notebook or desktop computer), a smart wearable device (such as a smart watch or smart glasses), an intelligent voice interaction device, a smart home appliance (such as a smart television), a vehicle-mounted terminal, an aircraft, or the like. The server mentioned above may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data and artificial intelligence platforms. Further, the terminal and the server may be located inside or outside a blockchain network, which is not limited here; moreover, the terminal and the server may upload any data stored in them to the blockchain network for storage, so as to prevent the stored data from being tampered with and to improve data security.
In this application, when the embodiments are applied to specific products or technologies, the collection and processing of related data (such as contour groups) should strictly comply with the requirements of relevant laws and regulations, the informed consent or separate consent of the personal information subject should be obtained, and subsequent data use and processing should remain within the scope authorized by laws, regulations and the personal information subject.
Based on the above description, an embodiment of the present application provides an image searching method. Referring to FIG. 2, which is a schematic flow chart of an image searching method provided in an embodiment of the present application, the method is described here as being executed by a computer device and may include the following steps S201 to S205:
S201: after the contour group to be loaded is determined in the target image, acquire the corresponding contour group and load it into the contour searching unit so as to refresh the storage space of the contour searching unit.
The currently loaded contour group comprises a first contour block and N second contour blocks, where N is a positive integer; the N second contour blocks are the contour blocks located in the N adjacent regions of the first contour block, one adjacent region corresponds to one motion direction, the region where the N second contour blocks are located is the prediction area, and a contour block is an image block divided from the target image. The embodiments of the present application take the 8-neighborhood as an example: one neighbor corresponds to one motion direction, so the 8-neighborhood has 8 motion directions, the 1st to 8th being right, upper right, up, upper left, left, lower left, down and lower right. Refreshing the storage space of the contour searching unit means that the currently loaded contour group replaces the contour group loaded into that storage space last time, so that during the whole search only storage space the size of one contour group is ever occupied in the contour searching unit; compared with loading the whole image, this reduces the occupied storage space and saves the storage resources of the contour searching unit.
Consider the scenario in which the computer device needs to search out the contour of the target image. In one possible implementation, the computer device may acquire the target image from the data source that provides it and store the target image in another storage space separate from the storage space of the contour searching unit; each time the contour group to be loaded is determined, the corresponding contour group is acquired from that other storage space and loaded into the contour searching unit so as to refresh its storage space. Optionally, the other storage space may be local storage of the computer device or remote storage outside the computer device, selected according to specific requirements, which is not limited in the embodiments of the present application. In another possible implementation, the computer device may not acquire the whole target image at all; instead, each time the contour group to be loaded is determined, it acquires the corresponding contour group directly from the data source that provides the target image and loads it into the contour searching unit. Compared with acquiring the target image and storing it in local storage, this also reduces the occupation of the computer device's local storage and saves its local storage resources.
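A minimal sketch of how a contour group might be cut out of the target image and loaded so that it replaces the previously loaded group follows. The fixed-size NumPy buffer, the class name and the assumption that the first contour block is not on the image border are illustrative only; the patent does not prescribe a data structure.

```python
import numpy as np

class ContourSearchUnit:
    """Toy model of a contour searching unit whose storage holds exactly one contour group."""

    def __init__(self, block_w, block_h):
        # storage space sized for one 3x3 group of contour blocks, never more
        self.buffer = np.zeros((3 * block_h, 3 * block_w), dtype=np.uint8)
        self.block_w, self.block_h = block_w, block_h
        self.origin = (0, 0)   # image coordinates of the buffer's top-left corner

    def load_group(self, image, first_block):
        """Refresh the storage: overwrite the buffer with the group centred on first_block.
        Assumes first_block is not on the image border (no padding handled in this sketch)."""
        bx, by = first_block
        x0, y0 = (bx - 1) * self.block_w, (by - 1) * self.block_h
        self.origin = (x0, y0)
        self.buffer[:] = image[y0:y0 + 3 * self.block_h, x0:x0 + 3 * self.block_w]
```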
S202: invoke the contour searching unit to search for contour points, starting from the first contour block in the currently loaded contour group, based on the N motion directions.
In one possible implementation, when the computer device invokes the contour searching unit to search for contour points based on the N motion directions, the currently searched contour point may be found based on the previously searched contour point and the N motion directions; that is, the search is performed among the N pixels adjacent to the previously searched contour point along the N motion directions, i.e. within the N adjacent regions of the previously searched contour point. It should be noted that if the currently loaded contour group is the first contour group loaded into the contour searching unit for the target image, then when searching for contour points from the first contour block based on the N motion directions, a starting contour point must first be determined in the first contour block, and this starting contour point is then treated as the previously searched contour point so that the search can continue. The contour point search itself has mature implementations, which may be selected according to specific requirements and are not limited in the embodiments of the present application; one possible realization is sketched below.
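One mature way to realize the neighbor search described above is Moore-style boundary tracing; the sketch below scans the 8 neighbors of the previously searched contour point for the next boundary pixel. The scan order and the `is_object` callback are assumptions, and this is only one possible realization, not necessarily the patent's exact search rule.

```python
# 0..7 = right, upper right, up, upper left, left, lower left, down, lower right (as in FIG. 1b)
OFFSETS_8 = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def next_contour_point(is_object, prev_point, prev_direction):
    """Search the 8 neighbors of prev_point for the next contour point.

    is_object(x, y) -> bool tells whether a pixel belongs to the object.
    Scanning starts just after the direction we came from (a common tracing heuristic).
    Returns (point, direction), or None if no contour point is found (an end-of-search case).
    """
    start = (prev_direction + 5) % 8          # assumed scan order; other orders are possible
    for i in range(8):
        d = (start + i) % 8
        x = prev_point[0] + OFFSETS_8[d][0]
        y = prev_point[1] + OFFSETS_8[d][1]
        if is_object(x, y):
            return (x, y), d
    return None
```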
S203: each time a contour point is searched, determine the current motion direction of the contour based on the currently searched contour point, and predict the confidence of the current motion direction at the current moment.
The current motion direction is the direction pointing from the previously searched contour point to the currently searched contour point, and is one of the N motion directions. Referring to FIG. 3, a schematic diagram for determining the current motion direction of the contour provided in an embodiment of the present application: if the contour of the target image is the shaded portion in FIG. 3, the currently searched contour point is shown by reference numeral 301, the previously searched contour point by reference numeral 302, and the contour searched so far by reference numeral 303, then the direction pointing from the contour point 302 to the contour point 301 is "down", so the current motion direction of the contour is "down".
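A small sketch of how the current motion direction could be derived from the previously and currently searched contour points; the direction table and the convention that y grows downward are the same assumptions used in the earlier sketches.

```python
OFFSETS_8 = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]
DIRECTION_NAMES = ["right", "upper right", "up", "upper left",
                   "left", "lower left", "down", "lower right"]

def current_motion_direction(prev_point, curr_point):
    """Return the index of the motion direction pointing from prev_point to curr_point."""
    delta = (curr_point[0] - prev_point[0], curr_point[1] - prev_point[1])
    return OFFSETS_8.index(delta)   # raises ValueError if the points are not 8-adjacent

# Example matching FIG. 3: the previous point 302 sits directly above the current point 301,
# so the delta is (0, 1) and the current motion direction is "down".
assert DIRECTION_NAMES[current_motion_direction((5, 7), (5, 8))] == "down"
```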
Any confidence indicates the probability of continuing to search for contour points along the corresponding motion direction; the larger this probability, the more likely it is that contour points will continue to be found along that motion direction, i.e. the more likely the contour is to keep moving in that direction after the current contour point. The confidence of the current motion direction at the current moment indicates the probability of continuing to search for contour points along the current motion direction at the next moment (i.e. at the next search). For example, the confidence of the "down" motion direction at the current moment indicates the probability that, at the next search, a contour point will be found along the "down" motion direction.
S204: if the current motion direction falls within the prediction area and the confidence of the current motion direction at the current moment is greater than or equal to a threshold, determine the contour group to be loaded next in the target image.
In one possible implementation, the computer device may judge whether the current motion direction falls within the prediction area based on the currently searched contour point: if the currently searched contour point falls within the prediction area, the current motion direction is considered to fall within the prediction area; otherwise it does not. When the confidence of the current motion direction at the current moment is greater than or equal to the threshold, it is predicted that contour points will continue to be found along the current motion direction at the next moment, i.e. that the contour will keep moving in the current motion direction after the currently searched contour point. Since the contour point search starts from the first contour block, the fact that the current motion direction falls within the prediction area and that contour points are predicted to continue along it means that the search is predicted to move away from the first contour block, i.e. the search within the first contour block of the currently loaded contour group is predicted to be about to finish. The contour group to be loaded can therefore be re-determined, acquired and loaded into the contour searching unit, so that contour point searching can continue from the first contour block of the newly loaded contour group.
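The S204 decision can be summarized by the following sketch; the way the calculation area is located inside the loaded 3×3 block window follows the assumptions of the earlier sketches and is not mandated by the patent.

```python
def need_new_contour_group(curr_point, group_origin, block_w, block_h, confidence, threshold):
    """Prefetch a new contour group when the currently searched contour point has left the
    first (calculation) block, i.e. the current motion direction falls into the prediction
    area, and the confidence of the current motion direction at the current moment reaches
    the threshold. group_origin is the image coordinate of the loaded group's top-left
    corner, so the calculation area is assumed to be the central block of the 3x3 window."""
    x0, y0 = group_origin
    in_calc_area = (x0 + block_w <= curr_point[0] < x0 + 2 * block_w and
                    y0 + block_h <= curr_point[1] < y0 + 2 * block_h)
    in_prediction_area = not in_calc_area
    return in_prediction_area and confidence >= threshold
```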
S205: iteratively execute the above process until the end-of-search event for the target image is detected, and determine the contour of the target image according to the contour points searched by the contour searching unit.
The end-of-search event for the target image is an event indicating that the search for the target image should end. For example, the end-of-search event may be that the currently searched contour point is the same point as the starting contour point: the contour point search starts from the starting contour point, and when the computer device detects that the search has returned to the starting contour point, it ends the search for the target image and determines the contour of the target image according to the contour points searched by the contour searching unit. As another example, the end-of-search event may be that no contour point is found when the next search is performed based on the currently searched contour point. The termination condition of the contour point search has mature implementations, which may be selected according to specific requirements and are not limited in the embodiments of the present application.
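The two example termination conditions above can be captured in a one-line helper; the function name and argument shapes are assumptions for illustration.

```python
def search_finished(curr_point, start_point, next_step):
    """End-of-search check: the search has walked back to the starting contour point,
    or no contour point was found at the next step (next_step is None)."""
    return curr_point == start_point or next_step is None
```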
In the embodiments of the present application, when searching for the contour of the target image, the target image is not loaded into the contour searching unit directly; instead, a contour group composed of several image blocks divided from the target image is loaded. While the contour searching unit is invoked to search for contour points in the currently loaded contour group, each time a contour point is searched it is judged in real time whether the contour searching unit needs to load a new contour group, so that the target image is continuously loaded into the contour searching unit contour group by contour group, refreshing its storage space, until the contour of the target image is obtained. Throughout this process the contour searching unit only ever occupies storage space the size of one contour group; compared with loading the whole image into the contour searching unit, this reduces the occupied storage space and saves the storage resources of the contour searching unit.
In addition, when judging whether the contour searching unit needs to load a new contour group after each contour point is searched, the probability of continuing to search for contour points along the current motion direction can be predicted based on the current motion direction of the contour. According to this predicted probability, whether a new contour group is needed can be judged accurately and in real time; once a new contour group is judged to be needed, the contour group to be loaded is determined and can be prefetched, hiding the storage latency required for loading it and improving overall search performance.
Based on the above description, an embodiment of the present application provides another image searching method. Referring to FIG. 4, which is a schematic flow chart of another image searching method provided in an embodiment of the present application, the method is again described as being executed by a computer device and may include the following steps S401 to S406:
S401: after the contour group to be loaded is determined in the target image, acquire the corresponding contour group and load it into the contour searching unit so as to refresh the storage space of the contour searching unit.
The currently loaded contour group comprises a first contour block and N second contour blocks, where N is a positive integer; the N second contour blocks are the contour blocks located in the N adjacent regions of the first contour block, one adjacent region corresponds to one motion direction, the region where the N second contour blocks are located is the prediction area, and a contour block is an image block divided from the target image. The embodiments of the present application take the 8-neighborhood as an example: one neighbor corresponds to one motion direction, so the 8-neighborhood has 8 motion directions, the 1st to 8th being right, upper right, up, upper left, left, lower left, down and lower right. The processing of step S401 is similar to that of step S201 and is not repeated here.
S402: invoke the contour searching unit to search for contour points, starting from the first contour block in the currently loaded contour group, based on the N motion directions.
The related process of step S402 is similar to that of step S202, and will not be described herein.
S403, determining the current motion direction of the contour based on the currently searched contour point every time one contour point is searched; and predicting the confidence of the current motion direction at the current moment.
The current motion direction is the direction pointing from the previously searched contour point to the currently searched contour point, and is one of the N motion directions. Any confidence indicates the probability of continuing to search for contour points along the corresponding motion direction; the larger this probability, the more likely it is that contour points will continue to be found along that motion direction, i.e. the more likely the contour is to keep moving in that direction after the current contour point. The confidence of the current motion direction at the current moment indicates the probability of continuing to search for contour points along the current motion direction at the next moment (i.e. at the next search); for example, the confidence of the "down" motion direction at the current moment indicates the probability that, at the next search, a contour point will be found along the "down" motion direction.
In one possible implementation, the computer device may predict the confidence of the current motion direction at the current moment as follows: determine the motion direction at the previous moment and acquire the confidence of the current motion direction at the previous moment, where the previous moment is the moment at which the previous contour point was searched; perform a consistency check between the motion direction at the previous moment and the current motion direction; and predict the confidence of the current motion direction at the current moment according to the consistency check result and the confidence of the current motion direction at the previous moment. The motion direction at the previous moment is the direction pointing from the contour point searched before last to the previously searched contour point. It should be noted that if the currently loaded contour group is the first contour group loaded into the contour searching unit for the target image, and the current search is the contour point search performed with the starting contour point treated as the previously searched contour point, then the current motion direction is the direction pointing from the starting contour point to the currently searched contour point; in this case the motion direction at the previous moment may be an initial motion direction configured in advance according to specific requirements.
As for how the computer device predicts the confidence of the current motion direction at the current moment according to the consistency check result and the confidence of the current motion direction at the previous moment, in one possible implementation the computer device may invoke a target confidence prediction model to perform confidence prediction according to the position coordinates of the currently searched contour point, obtaining a predicted value, and then calibrate the predicted value according to the consistency check result and the confidence of the current motion direction at the previous moment to obtain the confidence of the current motion direction at the current moment. The target confidence prediction model may be a probability model (i.e. a probability distribution function) selected according to specific requirements, for example a normal distribution function, a uniform distribution function, a Bernoulli distribution function, or the like. When calibrating the predicted value according to the consistency check result and the confidence of the current motion direction at the previous moment, the computer device may proceed as follows: if the consistency check is passed, sum the confidence of the current motion direction at the previous moment and the predicted value to obtain the confidence of the current motion direction at the current moment; if the consistency check is not passed, take the difference between the confidence of the current motion direction at the previous moment and the predicted value to obtain the confidence of the current motion direction at the current moment. In an alternative implementation, if the consistency check is passed, the predicted value itself may also be used as the confidence of the current motion direction at the current moment.
A consistency check result of "passed" indicates that the current motion direction is the same as the motion direction at the previous moment; a result of "not passed" indicates that they are different. Optionally, when summing the confidence of the current motion direction at the previous moment with the predicted value, either direct summation or weighted summation may be used; with weighted summation the weights may be set according to specific requirements, which is not limited in the embodiments of the present application, and the description below uses direct summation as an example. Similarly, when taking the difference between the confidence of the current motion direction at the previous moment and the predicted value, either a direct difference or a weighted difference may be used, which is likewise not limited, and the description below uses the direct difference as an example. In one exemplary implementation, the difference obtained by subtracting the predicted value from the confidence of the current motion direction at the previous moment is used as the confidence of the current motion direction at the current moment; in another exemplary implementation, the difference obtained by subtracting the confidence of the current motion direction at the previous moment from the predicted value is used instead.
In one possible implementation, since the current motion direction is one of the N motion directions, the embodiments of the present application support configuring a confidence prediction model for each of the N motion directions. The confidence prediction model for each motion direction may be configured according to specific requirements, and the models for different motion directions may be the same or different. On this basis, each of the N motion directions is configured with its own confidence prediction model, and each time a contour point is searched and the corresponding motion direction is determined, the confidence prediction model corresponding to that motion direction independently predicts the confidence of that motion direction at the corresponding moment. The target confidence prediction model is the confidence prediction model corresponding to the current motion direction, and it is invoked to perform the step of predicting the confidence of the current motion direction at the current moment; the confidence of the current motion direction at the previous moment used in this prediction was itself historically predicted by the target confidence prediction model. The confidence prediction model corresponding to any other motion direction predicts the confidence of its motion direction at the corresponding moment in the same way as the target confidence prediction model: the model corresponding to that motion direction is invoked to perform confidence prediction according to the position coordinates of the currently searched contour point, obtaining a predicted value for that motion direction, and the predicted value is then calibrated according to the consistency check result and the confidence of that motion direction at the previous moment to obtain the confidence of that motion direction at the current moment.
If the consistency check result is "passed", i.e. the current motion direction and the motion direction at the previous moment are the same direction, the confidence of the nth of the N motion directions at the current moment can be expressed by the following formula 1.1:
confidence(t1)(n)=confidence(t0)(n)+f(x1*α(n)+y1*β(n)) (1.1)
where t1 denotes the current moment, t0 the previous moment, and n ∈ [1, N]; confidence(t1)(n) denotes the confidence of the nth motion direction at the current moment, and confidence(t0)(n) the confidence of the nth motion direction at the previous moment; f(x1*α(n)+y1*β(n)) denotes the predicted value (for the nth motion direction) obtained by invoking the confidence prediction model corresponding to the nth motion direction according to the position coordinates (x1, y1) of the currently searched contour point; α(n) and β(n) denote the configurable model parameters used by the confidence prediction model corresponding to the nth motion direction, which may be configured according to specific requirements.
If the consistency check result is "not passed", i.e. the current motion direction and the motion direction at the previous moment are different directions, the confidence of the nth motion direction at the current moment can be expressed by the following formula 1.2:
confidence(t1)(n)=confidence(t0)(n)-f(x1*α(n)+y1*β(n)) (1.2)
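To make the per-direction update concrete, the following Python sketch applies formulas 1.1 and 1.2. It is a minimal illustration rather than the patented implementation: the choice of a logistic function for f, the parameter names alpha and beta, and the example values are assumptions introduced for the sketch.

```python
import math

def update_confidence(conf_prev, point, alpha, beta, same_direction, f=None):
    """Update the confidence of the n-th motion direction per formulas 1.1 / 1.2.

    conf_prev      -- confidence(t0)(n): confidence of this direction at the previous moment
    point          -- (x1, y1): position coordinates of the currently searched contour point
    alpha, beta    -- the configurable model parameters alpha(n), beta(n) of this direction
    same_direction -- True if the consistency check passed (same direction as last time)
    f              -- the confidence prediction model; a logistic function is assumed here
    """
    if f is None:
        f = lambda v: 1.0 / (1.0 + math.exp(-v))      # assumed probability-like model
    x1, y1 = point
    predicted = f(x1 * alpha + y1 * beta)             # predicted value for this direction
    if same_direction:
        return conf_prev + predicted                  # formula 1.1: calibrate upward
    return conf_prev - predicted                      # formula 1.2: calibrate downward

# Example: the contour keeps moving in the same direction, so its confidence grows.
print(update_confidence(0.4, (12, 7), alpha=0.01, beta=0.02, same_direction=True))
```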
In a possible implementation, the embodiment of the application supports configuring an information table for each of the N motion directions. The information table is used to store the confidence predicted by the confidence prediction model corresponding to that motion direction, and may specifically store the confidence of that motion direction at the previous moment, as predicted by the corresponding confidence prediction model. On this basis, each of the N motion directions is configured with an information table, and any information table is used to store the confidence predicted by the confidence prediction model corresponding to the respective motion direction. When the target confidence prediction model predicts the confidence of the current motion direction at the current moment, the confidence of the current motion direction at the previous moment is obtained from the information table corresponding to the current motion direction. After the confidence of the current motion direction at the current moment is determined, the computer device can refresh the information table corresponding to the current motion direction with the confidence of the current motion direction at the current moment predicted by the target confidence prediction model; that is, the confidence of the current motion direction at the current moment is written into the table entry that stores the confidence of the current motion direction at the previous moment. Similarly, the information table corresponding to any motion direction can be refreshed with the confidence of that motion direction at the current moment, so that on each search the confidence of each direction at the previous moment can be looked up quickly from the information table corresponding to that direction.
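The sketch below shows one possible shape for such per-direction information tables, using an 8-neighborhood as in Table 1. The field names and the dictionary-based layout are assumptions made only for illustration; a hardware implementation would typically use registers or a small RAM instead.

```python
N_DIRECTIONS = 8   # 8-neighborhood example, as in Table 1

# One information table per motion direction; the field names and dictionary layout
# are illustrative only.
info_tables = [
    {
        "confidence_prev": 0.0,   # confidence of this direction at the previous moment
        "confidence_curr": 0.0,   # confidence of this direction at the current moment
        "alpha": 0.01,            # optional: configurable model parameter alpha(n)
        "beta": 0.02,             # optional: configurable model parameter beta(n)
        "predicted_value": 0.0,   # optional: last predicted value of this direction
    }
    for _ in range(N_DIRECTIONS)
]

def refresh_table(direction, confidence_curr):
    """Refresh the information table of one direction with its confidence at the current moment."""
    table = info_tables[direction]
    table["confidence_curr"] = confidence_curr
    # As described above, the value is also written into the entry holding the confidence
    # at the previous moment, so the next search reads it from there directly.
    table["confidence_prev"] = confidence_curr

def lookup_prev_confidence(direction):
    """Quickly look up the confidence of a direction at the previous moment."""
    return info_tables[direction]["confidence_prev"]

refresh_table(direction=3, confidence_curr=0.7)
print(lookup_prev_confidence(3))
```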
Referring to Table 1 below, taking an 8-neighborhood as an example, an information table corresponding to each motion direction is exemplarily shown. The information table corresponding to any motion direction may be used to store the confidence of that motion direction at the previous moment and the confidence of that motion direction at the current moment, and may optionally also store data such as the confidence prediction model corresponding to that motion direction, the configurable model parameters used by that model, and the predicted value corresponding to that motion direction.
TABLE 1
In another possible implementation, if the consistency check result is a result that passes the consistency check, the confidence of the current motion direction at the previous moment is increased to obtain the confidence of the current motion direction at the current moment; if the consistency check result is a result that does not pass the consistency check, the confidence of the current motion direction at the previous moment is decreased to obtain the confidence of the current motion direction at the current moment. When increasing the confidence of the current motion direction at the previous moment, an appropriate increasing mode may be selected according to specific requirements, which is not limited in the embodiment of the application: for example, the increase may be based on a preset amplification factor (i.e., the confidence of the current motion direction at the previous moment is multiplied by a preset amplification factor, which can be configured according to requirements), or based on a preset value (i.e., a preset value is added to the confidence of the current motion direction at the previous moment, the preset value being configurable according to requirements), and so on. Similarly, when decreasing the confidence of the current motion direction at the previous moment, an appropriate decreasing mode may be selected according to specific requirements, which is not limited in this embodiment: for example, the decrease may be based on a preset reduction coefficient (i.e., the confidence of the current motion direction at the previous moment is multiplied by a preset reduction coefficient, configurable according to requirements), or based on a preset value (i.e., a preset value is subtracted from the confidence of the current motion direction at the previous moment, the preset value being configurable according to requirements), and so on. A minimal sketch of these simpler update modes is given below.
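The factor and offset values in this sketch are placeholders chosen only for illustration; both functions assume the simple multiplicative and additive update modes described above.

```python
def update_by_factor(conf_prev, passed_check, amplify=1.2, reduce=0.8):
    """Scale the previous confidence up or down depending on the consistency check result.

    amplify / reduce stand for the preset amplification and reduction coefficients;
    the values here are placeholders and are meant to be configured per requirements.
    """
    return conf_prev * (amplify if passed_check else reduce)

def update_by_offset(conf_prev, passed_check, step=0.1):
    """Add or subtract a preset value depending on the consistency check result."""
    return conf_prev + step if passed_check else conf_prev - step

print(update_by_factor(0.5, passed_check=True), update_by_offset(0.5, passed_check=False))
```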
In a further possible implementation, if the consistency check result is a result that passes the consistency check, the confidence of the current motion direction at the previous moment is used as the confidence of the current motion direction at the current moment; if the consistency check result is a result that does not pass the consistency check, the confidence of the current motion direction at the current moment is obtained based on the confidence of the current motion direction at the previous moment. In the case of failing the consistency check, any of the ways described above for obtaining the confidence of the current motion direction at the current moment under a failed check may be selected according to specific requirements; for example, a difference operation may be performed on the confidence of the current motion direction at the previous moment and the predicted value, or the confidence of the current motion direction at the previous moment may be decreased, to obtain the confidence of the current motion direction at the current moment.
In a possible embodiment, the computer device may perform the following steps when predicting the confidence of the current motion direction at the current moment: determine the motion direction at the previous moment and acquire the confidence of the motion direction at the previous moment, where the previous moment refers to the moment when a contour point was last searched; perform a consistency check between the motion direction at the previous moment and the current motion direction, and predict the confidence of the current motion direction at the current moment according to the consistency check result and the confidence of the motion direction at the previous moment. The confidence of the motion direction at the previous moment refers to the confidence, at the previous moment, of the direction in which the contour was then moving; when the motion direction at the previous moment is the same as the current motion direction, the confidence of the motion direction at the previous moment is exactly the confidence of the current motion direction at the previous moment. The process of predicting the confidence of the current motion direction at the current moment according to the consistency check result and the confidence of the motion direction at the previous moment is similar to the process of predicting it according to the consistency check result and the confidence of the current motion direction at the previous moment.
For the process in which the computer device predicts the confidence of the current motion direction at the current moment according to the consistency check result and the confidence of the motion direction at the previous moment, in a feasible implementation the computer device may invoke the target confidence prediction model to perform confidence prediction according to the position coordinates of the currently searched contour point, obtaining a predicted value; the predicted value is then calibrated according to the consistency check result and the confidence of the motion direction at the previous moment, obtaining the confidence of the current motion direction at the current moment. The target confidence prediction model may be a probability model (i.e., a probability distribution function) selected according to specific requirements, for example a normal distribution function, a uniform distribution function, a Bernoulli distribution function, or another probability distribution function. When calibrating the predicted value according to the consistency check result and the confidence of the motion direction at the previous moment to obtain the confidence of the current motion direction at the current moment, the computer device may perform the following steps: if the consistency check result is a result that passes the consistency check, a summation operation is performed on the confidence of the motion direction at the previous moment and the predicted value, obtaining the confidence of the current motion direction at the current moment; if the consistency check result is a result that does not pass the consistency check, a difference operation is performed on the confidence of the motion direction at the previous moment and the predicted value, obtaining the confidence of the current motion direction at the current moment. In an alternative embodiment, if the consistency check result is a result that passes the consistency check, the predicted value may also be used directly as the confidence of the current motion direction at the current moment. In another alternative embodiment, if the consistency check result passes the consistency check, the confidence of the motion direction at the previous moment is increased to obtain the confidence of the current motion direction at the current moment. In yet another alternative embodiment, if the consistency check result passes the consistency check, the confidence of the motion direction at the previous moment is used as the confidence of the current motion direction at the current moment.
In a possible implementation, since the current motion direction is one of the N motion directions, the embodiment of the application supports configuring a confidence prediction model for each of the N motion directions. The model configured for each motion direction can be chosen according to specific requirements, and the models configured for different motion directions may be the same model or different models. On this basis, each of the N motion directions is configured with a confidence prediction model; each time a contour point is searched and the corresponding motion direction is determined, the confidence prediction model corresponding to that motion direction is used to independently predict the confidence of that motion direction at the corresponding moment. The target confidence prediction model refers to the confidence prediction model corresponding to the current motion direction; the step of predicting the confidence of the current motion direction at the current moment is performed by invoking the target confidence prediction model; during this prediction, the confidence of the motion direction at the previous moment was historically predicted by the confidence prediction model corresponding to the motion direction at the previous moment.
In a possible implementation, the embodiment of the application supports configuring an information table for each of the N motion directions. The information table is used to store the confidence predicted by the confidence prediction model corresponding to that motion direction, and may specifically store the confidence of that motion direction at the previous moment, as predicted by the corresponding confidence prediction model. On this basis, each of the N motion directions is configured with an information table, and any information table is used to store the confidence predicted by the confidence prediction model corresponding to the respective motion direction. When the target confidence prediction model predicts the confidence of the current motion direction at the current moment, the confidence of the motion direction at the previous moment is obtained from the information table corresponding to the motion direction at the previous moment. After the confidence of the current motion direction at the current moment is determined, the computer device can refresh the information table corresponding to the current motion direction with the confidence of the current motion direction at the current moment predicted by the target confidence prediction model; that is, the confidence of the current motion direction at the current moment is written into the table entry that stores the confidence of the current motion direction at the previous moment. Similarly, the information table corresponding to any motion direction can be refreshed with the confidence of that motion direction at the current moment, so that on each search the confidence of each direction at the previous moment can be looked up quickly from the information table corresponding to that direction.
S404, if the current motion direction falls within the prediction area and the confidence of the current motion direction at the current moment is greater than or equal to a threshold, determine the contour group to be loaded in the target image.
In one possible implementation, the computer device may judge whether the current motion direction falls within the prediction area based on the currently searched contour point: when the currently searched contour point falls within the prediction area, the current motion direction is determined to fall within the prediction area; otherwise, the current motion direction is determined not to fall within the prediction area. For example, if the size of a contour block is (M×K)×(N×K), that is, the length of the contour block is M×K and the width is N×K, and the center point coordinate of the first contour block is expressed as (x, y), then the distance from the center point of the first contour block to its boundary may be [(M-2)×K/2, (N-2)×K/2], and on this basis the four vertex coordinates of the first contour block may be: [x+(M-2)×K/2, y+(N-2)×K/2], [x+(M-2)×K/2, y-(N-2)×K/2], [x-(M-2)×K/2, y+(N-2)×K/2] and [x-(M-2)×K/2, y-(N-2)×K/2]. Then, based on the positional relationship between the position coordinates of the currently searched contour point and the four vertex coordinates of the first contour block, it can be judged whether the currently searched contour point falls within the prediction area.
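A compact sketch of this check follows. It assumes the interpretation above: a contour point lying outside the boundary of the first contour block (and hence inside one of the surrounding second contour blocks) is treated as falling within the prediction area. The half-extent computation mirrors the (M-2)×K/2 and (N-2)×K/2 expressions; the example sizes are made up for illustration.

```python
def falls_in_prediction_area(point, center, M, N, K):
    """Return True if the currently searched contour point falls within the prediction area.

    point   -- (px, py): position coordinates of the currently searched contour point
    center  -- (x, y): center point coordinate of the first contour block
    M, N, K -- contour-block size parameters, block size (M*K) x (N*K)
    """
    px, py = point
    x, y = center
    half_w = (M - 2) * K / 2   # distance from the center to the block boundary (length axis)
    half_h = (N - 2) * K / 2   # distance from the center to the block boundary (width axis)
    inside_first_block = (x - half_w <= px <= x + half_w) and (y - half_h <= py <= y + half_h)
    # A point outside the first contour block lies in one of the N surrounding second
    # contour blocks, i.e. within the prediction area.
    return not inside_first_block

# Example with assumed sizes: a point beyond the first block's boundary is in the prediction area.
print(falls_in_prediction_area((30, 5), center=(16, 16), M=4, N=4, K=8))
```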
In a possible implementation, when the computer device determines the contour group to be loaded in the target image, the block attribute information of the first contour block to be loaded may be predicted, obtaining predicted block attribute information; the first contour block to be loaded and the N second contour blocks corresponding to that first contour block are then determined in the target image according to the predicted block attribute information; and the determined first contour block and the corresponding N second contour blocks are taken as the contour group to be loaded.
In a possible implementation, any block attribute information includes the center point coordinate of the corresponding contour block, and the predicted block attribute information includes a predicted center point coordinate. On this basis, the first contour block to be loaded in the target image may be determined according to the predicted center point coordinate, specifically based on the predicted center point coordinate and the configured contour block size, and the contour blocks in the N neighbor areas of the determined first contour block may then be determined as the corresponding second contour blocks.
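The sketch below illustrates this step for an 8-neighborhood: given a predicted center point and a configured contour block size, it derives the first contour block and its eight neighboring second contour blocks. Snapping the predicted center onto a block grid, the direction ordering, and the returned data layout are assumptions made for the example.

```python
def determine_contour_group(pred_center, block_w, block_h):
    """Determine the contour group to be loaded from a predicted center point coordinate.

    pred_center      -- (cx, cy): predicted center point coordinate
    block_w, block_h -- configured contour block size in pixels

    Returns the center of the first contour block to be loaded and the centers of the
    eight second contour blocks located in its neighbor areas (8-neighborhood example).
    """
    cx, cy = pred_center
    # Snap the predicted center onto the center of the contour block containing it (assumed).
    fx = (int(cx) // block_w) * block_w + block_w // 2
    fy = (int(cy) // block_h) * block_h + block_h // 2
    first_block = (fx, fy)
    # One neighbor area per motion direction: E, NE, N, NW, W, SW, S, SE.
    offsets = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]
    second_blocks = [(fx + dx * block_w, fy + dy * block_h) for dx, dy in offsets]
    return first_block, second_blocks

# Example: the predicted center point is the position of the currently searched contour point.
print(determine_contour_group((101, 53), block_w=32, block_h=32))
```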
In a possible implementation, in the case where any block attribute information includes the center point coordinate of the corresponding contour block and the predicted block attribute information includes a predicted center point coordinate, when the computer device predicts the block attribute information of the first contour block to be loaded to obtain the predicted block attribute information, the position coordinate of the currently searched contour point may be acquired as the predicted center point coordinate. Alternatively, the coordinate of one pixel point may be selected from the contour block in which the currently searched contour point is located, and used as the predicted center point coordinate.
In another possible implementation, in the case where any block attribute information includes the center point coordinate of the corresponding contour block and the predicted block attribute information includes a predicted center point coordinate, since the current motion direction is one of the N motion directions and each of the N motion directions is configured with an information table, the application supports writing into any information table at least one center point coordinate set for the corresponding motion direction. On this basis, any information table includes at least one center point coordinate set for the corresponding motion direction. When the computer device predicts the block attribute information of the first contour block to be loaded to obtain the predicted block attribute information, the information table corresponding to the current motion direction may be taken as the target information table according to the correspondence between motion directions and information tables, and one center point coordinate may be prefetched from the center point coordinates included in the target information table as the predicted center point coordinate.
For the currently loaded contour group, setting the coordinate of a pixel point in the nth second contour block as the center point coordinate for the nth motion direction is supported, where the nth second contour block is the second contour block located in the nth neighbor area of the first contour block and the nth motion direction is the motion direction corresponding to the nth neighbor area. Which pixel coordinate of the nth second contour block is set as the center point coordinate for the nth motion direction may be selected according to specific requirements: for example, the coordinate of a pixel point may be randomly selected from the nth second contour block and set as the center point coordinate for the nth motion direction, or the center point coordinate of the nth second contour block may be selected and set as the center point coordinate for the nth motion direction. It can be seen that when a new contour group is loaded, the at least one center point coordinate set for each motion direction and stored in the corresponding information table is refreshed, and the refreshed center point coordinates are adapted to the new contour group; a sketch of this refresh-and-prefetch step is given after this paragraph. Referring to fig. 5a, a schematic diagram of determining a contour group to be loaded according to an embodiment of the present application: if the currently loaded contour group is as shown by the mark 500 and the predicted center point coordinate is the position coordinate of the currently searched contour point, as shown by the mark 501, the determined contour group to be loaded may be as shown by the mark 502.
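The following sketch combines the two pieces just described: when a new contour group is loaded, the center point coordinate of each second contour block is written into the information table of the corresponding motion direction, and when a new first contour block must be predicted, a center coordinate is prefetched from the table of the current motion direction. The table layout and the example coordinates are assumptions made for illustration.

```python
N_DIRECTIONS = 8

# Per-direction information tables, here reduced to the field needed for prefetching
# (assumed layout, for illustration only).
info_tables = [{"centers": []} for _ in range(N_DIRECTIONS)]

def refresh_center_coordinates(second_block_centers):
    """On loading a new contour group, store the center point coordinate of the n-th second
    contour block in the information table of the n-th motion direction."""
    for n, center in enumerate(second_block_centers):
        info_tables[n]["centers"] = [center]   # refreshed to suit the new contour group

def prefetch_center(current_direction):
    """Prefetch a center point coordinate from the target information table (the table of
    the current motion direction) to use as the predicted center point coordinate."""
    centers = info_tables[current_direction]["centers"]
    return centers[0] if centers else None

# Example with assumed centers of the eight neighbor blocks of a 32x32 block grid.
refresh_center_coordinates([(96, 64), (96, 32), (64, 32), (32, 32),
                            (32, 64), (32, 96), (64, 96), (96, 96)])
print(prefetch_center(current_direction=2))
```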
Referring to Table 2 below, taking an 8-neighborhood as an example, an information table corresponding to each motion direction is exemplarily shown. The information table corresponding to any motion direction may be used to store the confidence of that motion direction at the previous moment, the confidence of that motion direction at the current moment, and at least one center point coordinate set for that motion direction; optionally, it may also store the confidence prediction model corresponding to that motion direction, the configurable model parameters used by that model, the predicted value corresponding to that motion direction, and so on. Of course, if other data needs to be written, corresponding data (such as the contour block size) may also be written into the information table corresponding to the motion direction. Illustratively, Table 2 describes an example in which one center point coordinate is set for each motion direction.
TABLE 2
In a possible implementation, the target image may be divided into a plurality of contour blocks according to the configured contour block size, any block attribute information includes identification information of the corresponding contour block, and the predicted block attribute information includes predicted identification information. Optionally, the identification information may be a contour block label or the center point coordinate of the contour block, where the contour block labels may be obtained by labeling the plurality of contour blocks obtained by dividing the target image according to a preset labeling rule; the preset labeling rule may be, for example, incrementing the labels row by row from top to bottom, incrementing the labels row by row from bottom to top, and so on. On this basis, the first contour block to be loaded in the target image can be determined according to the predicted identification information, and the contour blocks in the N neighbor areas of the determined first contour block can then be determined as the corresponding second contour blocks.
In a possible implementation, in the case where any block attribute information includes the identification information of the corresponding contour block and the predicted block attribute information includes predicted identification information, when the computer device predicts the block attribute information of the first contour block to be loaded to obtain the predicted block attribute information, the identification information of the contour block in which the currently searched contour point is located may be acquired as the predicted identification information.
In another possible implementation, in the case where any block attribute information includes the identification information of the corresponding contour block and the predicted block attribute information includes predicted identification information, since the current motion direction is one of the N motion directions and each of the N motion directions is configured with an information table, the application supports writing into any information table the identification information set for the corresponding motion direction. On this basis, any information table includes the identification information set for the corresponding motion direction. When the computer device predicts the block attribute information of the first contour block to be loaded to obtain the predicted block attribute information, the information table corresponding to the current motion direction may be taken as the target information table according to the correspondence between motion directions and information tables, and the identification information included in the target information table may be selected as the predicted identification information. For the currently loaded contour group, setting the identification information of the nth second contour block as the identification information for the nth motion direction is supported, where the nth second contour block is the second contour block located in the nth neighbor area of the first contour block and the nth motion direction is the motion direction corresponding to the nth neighbor area. It can be understood that when a new contour group is loaded, the identification information set for each motion direction and stored in the corresponding information table is refreshed, and the refreshed identification information is adapted to the new contour group.
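As a small illustration of label-based identification, the sketch below labels contour blocks row by row from top to bottom (one of the example labeling rules above) and derives the label of the block containing a point together with the labels of its eight neighbor blocks. The grid width, the block size, and the row-major numbering starting at 1 are assumptions made for the example.

```python
def block_label(point, block_w, block_h, blocks_per_row):
    """Label of the contour block containing a point, with labels incremented row by row
    from top to bottom (row-major order, starting at 1 -- an assumed convention)."""
    px, py = point
    col = int(px) // block_w
    row = int(py) // block_h
    return row * blocks_per_row + col + 1

def neighbor_labels(label, blocks_per_row):
    """Labels of the eight neighboring contour blocks (boundary handling omitted for brevity)."""
    return [label + 1, label - blocks_per_row + 1, label - blocks_per_row,
            label - blocks_per_row - 1, label - 1, label + blocks_per_row - 1,
            label + blocks_per_row, label + blocks_per_row + 1]

# Example: in an assumed grid four blocks wide with 32x32 blocks, the point (40, 70) lies
# in the block labeled 10, and the neighbor labels identify the second contour blocks.
lbl = block_label((40, 70), block_w=32, block_h=32, blocks_per_row=4)
print(lbl, neighbor_labels(lbl, blocks_per_row=4))
```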
Referring to fig. 5b, another schematic diagram of determining a contour group to be loaded according to an embodiment of the present application: assume the preset labeling rule increments the labels row by row from top to bottom, with the label of each contour block in the target image shown inside that block; if the currently loaded contour group is as shown by the mark 510, the position coordinate of the currently searched contour point is as shown by the mark 511, and the predicted identification information is contour block label 10, the determined contour group to be loaded may be as shown by the mark 512.
S405, if the current motion direction does not fall within the prediction area, or if the current motion direction falls within the prediction area but the confidence of the current motion direction at the current moment is smaller than the threshold, continue to search for contour points in the currently loaded contour group.
S406, iteratively executing the process until the end search event of the target image is detected, and determining the contour of the target image according to each contour point searched by the contour search unit.
In one possible implementation, the contour searching unit supports running a plurality of contour search acceleration engines, which are used to process in parallel a plurality of images on which image searching is to be performed, obtaining the contour of each image and thereby increasing the rate of image searching.
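A minimal sketch of such parallelism is given below, using a Python thread pool to stand in for multiple contour search acceleration engines; the engine function is a placeholder, since the actual engines are hardware units rather than Python callables.

```python
from concurrent.futures import ThreadPoolExecutor

def contour_search_engine(image):
    """Placeholder for one contour search acceleration engine: it would repeat
    'load contour group -> search contour points -> predict the next group' until the
    end-search event and return the contour of its image."""
    return f"contour of {image}"   # stand-in result

def search_images_in_parallel(images, num_engines=4):
    """Process several images in parallel, one engine per image, to raise search throughput."""
    with ThreadPoolExecutor(max_workers=num_engines) as pool:
        return list(pool.map(contour_search_engine, images))

print(search_images_in_parallel(["img_0", "img_1", "img_2"]))
```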
In the embodiment of the application, in the process of searching for the contour of the target image, the target image is not loaded directly into the contour searching unit; instead, a contour group composed of several image blocks divided from the target image is loaded. While the contour searching unit is invoked to search for contour points in the currently loaded contour group, each time a contour point is searched it is judged in real time whether the contour searching unit needs to load a new contour group, so that the target image can be loaded into the contour searching unit continuously in the form of contour groups, refreshing the storage space of the contour searching unit and searching, until the contour of the target image is obtained. Throughout the search based on the contour searching unit, only a storage space of one contour group's size is ever occupied in the storage space of the contour searching unit; compared with loading the whole image into the contour searching unit, this reduces the occupation of the storage space of the contour searching unit, saves its storage resources, and allows higher performance to be achieved with a smaller storage space.
In addition, since whether the contour searching unit needs to load a new contour group is judged in real time each time a contour point is searched, the probability of continuing to search for contour points along the current motion direction (i.e., the confidence of the current motion direction at the current moment) can be predicted for the currently searched contour point based on the context of the image search (i.e., the current motion direction and the historical motion directions). According to this predicted probability it can be accurately judged in real time whether a new contour group needs to be loaded, and when it is judged that one is needed, the contour group to be loaded is determined so that it can be prefetched, hiding the storage time required for loading it. In this way the target image can be loaded into the contour searching unit continuously in the form of contour groups, and the overall search performance is improved on the basis of refreshing the storage space of the contour searching unit and searching.
The image searching method provided by the embodiment of the application may be deployed in an image searching apparatus, and the image searching apparatus is run by a computer device to implement the image searching method; the image searching apparatus may specifically be a chip running in a computer device, a hardware system (including but not limited to an application-specific integrated circuit, a programmable logic circuit, a processor, etc.), or central processing system firmware (such as a central processing unit (Central Processing Unit, CPU) or a graphics processing unit (Graphics Processing Unit, GPU)), and so on. The structure of an image searching apparatus according to an embodiment of the present application is exemplarily described below. Referring to fig. 6, a schematic structural diagram of an image searching apparatus according to an embodiment of the present application may include a loading unit, a contour searching unit, and a prediction unit. The loading unit may be configured to execute the processing logic related to data loading, for example to acquire a contour group and load it into the contour searching unit; the contour searching unit may be configured to execute the processing logic related to image searching, for example to perform contour point searching in a contour group to obtain contour points; and the prediction unit may be configured to execute the processing logic related to predicting the contour group to be loaded, for example to judge whether a new contour group needs to be loaded each time a contour point is searched. When the image searching method proposed in the embodiment of the present application is executed cooperatively by these three units, the execution process may be roughly described as follows:
The loading unit may, after acquiring the block attribute information corresponding to a contour group to be loaded, acquire the corresponding contour group and load it into the contour searching unit to refresh the storage space of the contour searching unit; the currently loaded contour group comprises a first contour block and N second contour blocks, the N second contour blocks comprise the contour blocks located in the N neighbor areas of the first contour block, one neighbor area corresponds to one motion direction, and the area where the N second contour blocks are located is the prediction area.
The contour searching unit may perform contour point searching based on the N motion directions, starting from the first contour block in the currently loaded contour group, and each time a contour point is searched, transmit the related information of the currently searched contour point to the prediction unit; the related information of the currently searched contour point comprises the data required to predict the block attribute information corresponding to the contour group to be loaded (i.e., the new contour group), and may at least include the position coordinates of the currently searched contour point.
After receiving the related information of the currently searched contour point, the prediction unit may determine the current motion direction of the contour based on the related information of the currently searched contour point, and predict the confidence of the current motion direction at the current time.
If the current motion direction does not fall within the prediction area, or if the current motion direction falls within the prediction area but the confidence of the current motion direction at the current moment is smaller than the threshold, feedback information indicating that contour point searching should continue in the currently loaded contour group is returned to the contour searching unit, so that the contour searching unit continues to search for contour points in the currently loaded contour group.
If the current motion direction falls within the prediction area and the confidence of the current motion direction at the current moment is greater than or equal to the threshold, the block attribute information corresponding to the contour group to be loaded (i.e., the new contour group) is predicted (that is, the block attribute information of the first contour block to be loaded is predicted), obtaining predicted block attribute information; the predicted block attribute information is returned to the contour searching unit and transmitted to the loading unit, so that the loading unit determines the contour group to be loaded according to the predicted block attribute information and acquires the corresponding contour group to load into the contour searching unit.
The three units cooperatively and iteratively execute the above process until the contour searching unit detects the end-search event of the target image, and the contour of the target image is determined according to the contour points searched by the contour searching unit.
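The following Python sketch approximates this three-unit cooperation as a single loop. The unit boundaries, function names, stopping condition, and the trivial stub behavior are simplifications introduced for illustration, not the hardware design itself.

```python
def image_search(target_image, threshold=0.5, max_steps=100):
    """Simplified cooperative loop of the loading unit, contour searching unit and
    prediction unit. The inner helpers are trivial stubs so the sketch runs end to end;
    a real apparatus replaces them with the behavior of the hardware units."""

    def load_group(attrs):                       # loading unit: fetch a contour group
        return {"center": attrs}

    def search_next_point(group, step):          # contour searching unit: next contour point
        if step >= 5:                            # pretend the end-search event fires here
            return None, None
        return (group["center"][0] + step, group["center"][1]), 0   # move in direction 0

    def predict_confidence(direction, point):    # prediction unit: confidence of the direction
        return 0.6                               # constant stand-in value

    def falls_in_prediction_area(point, group):  # geometric check, see the earlier sketch
        return abs(point[0] - group["center"][0]) > 2

    contour = []
    group = load_group((0, 0))
    for step in range(max_steps):
        point, direction = search_next_point(group, step)
        if point is None:                        # end-search event detected
            break
        contour.append(point)
        conf = predict_confidence(direction, point)
        if falls_in_prediction_area(point, group) and conf >= threshold:
            group = load_group(point)            # determine, prefetch and load the next group
    return contour

print(image_search("target_image"))
```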
The contour searching unit may also be used to execute pre-processing tasks and post-processing tasks. A pre-processing task refers to the operations executed by the contour searching unit after each loading of a contour group and before the contour point search, and may include, for example, input format adjustment, resolution adjustment, filtering, and image processing (such as image enhancement, noise reduction, filtering, and smoothing). The pre-processing task is executed so that the loaded contour group satisfies the requirements of the contour point search; the operations involved may be selected according to specific requirements, and the pre-processing task is not necessary when the loaded contour group already satisfies the requirements of the contour point search. A post-processing task refers to a task designed and executed according to specific business requirements after the contour of the target image has been determined; the post-processing task is designed and executed on demand and is not necessary for implementing the image processing method provided by the application. For example, since the contour searching unit supports running a plurality of contour search acceleration engines used to process a plurality of images in parallel and obtain the contour of each image so as to increase the rate of image searching, the post-processing task may include, but is not limited to, processing the contours obtained by the respective contour search acceleration engines, for example screening, merging, sorting, or packaging them: under one business requirement, if contours meeting a specific condition need to be extracted, screening is required; under another business requirement, if the contours of several images need to be merged and output, merging is required.
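As a small hedged example of such a pre-processing step, the snippet below resizes a loaded block and applies smoothing with OpenCV; the concrete operations and parameter values are illustrative only, since the pre-processing task is chosen per requirements (and may be omitted entirely).

```python
import cv2
import numpy as np

def preprocess_block(block, target_size=(64, 64)):
    """Illustrative pre-processing for a loaded contour block: resolution adjustment
    followed by Gaussian smoothing (noise reduction)."""
    resized = cv2.resize(block, target_size, interpolation=cv2.INTER_LINEAR)
    return cv2.GaussianBlur(resized, (5, 5), 0)

# Example on a synthetic 32x32 grayscale block.
print(preprocess_block(np.zeros((32, 32), dtype=np.uint8)).shape)
```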
The loading unit may further be used to load other data that needs to be loaded to implement the image searching method provided by the embodiment of the application. For example, the loading unit may be used to load input instructions, where the input instructions may include data such as the image resolution (e.g., height and width), parameters required by a pre-processing task, or parameters required by a post-processing task; as another example, the loading unit may be used to load the configurable data required to implement the image searching method (including the contour block size, the initial motion direction, the confidence prediction models correspondingly configured for the different motion directions, the configurable model parameters used by those models, and other data mentioned in the foregoing embodiments); as yet another example, the loading unit may also be used to acquire images when there is an image acquisition demand.
It can thus be seen that the loading unit and the prediction unit provided in the embodiments of the present application constitute a novel technique: for the contour point currently searched during image searching, the current motion direction of the contour can be determined and the probability of continuing to search for contour points along the current motion direction (i.e., the confidence of the current motion direction at the current moment) can be predicted; according to this predicted probability it can be accurately judged in real time whether a new contour group needs to be loaded, and when it is judged that one is needed, the contour group to be loaded is determined, so that the target image can be loaded into the contour searching unit continuously in the form of contour groups, refreshing the storage space of the contour searching unit and searching, which reduces the occupation of the storage space of the contour searching unit, saves its storage resources, and improves the overall search performance.
Based on the description of the embodiments of the image searching method, the embodiment of the application further discloses another image searching apparatus; the image searching apparatus may be a computer program (comprising one or more instructions) running on a computer device or a chip, and may perform the steps of the method flows shown in fig. 2 or fig. 4. Referring to fig. 7, the image searching apparatus may operate the following units:
an obtaining unit 701, configured to obtain, after determining a profile group to be loaded in a target image, a corresponding profile group to be loaded into a profile searching unit, so as to refresh a storage space of the profile searching unit; wherein the currently loaded profile group comprises: the first contour block and N second contour blocks, wherein N is a positive integer; the N second contour blocks comprise contour blocks positioned in N adjacent areas of the first contour block, one adjacent area corresponds to one motion direction, the area where the N second contour blocks are positioned is a prediction area, and the contour blocks refer to image blocks divided from the target image;
a processing unit 702, configured to invoke the contour searching unit to perform contour point searching based on the N motion directions from the first contour block in the currently loaded contour group;
The processing unit 702 is further configured to: each time a contour point is searched, determine the current motion direction of the contour based on the currently searched contour point, and predict the confidence of the current motion direction at the current moment; any confidence is used to indicate the probability of continuing to search for contour points along the respective motion direction;
the processing unit 702 is further configured to determine a profile group to be loaded in the target image if the current motion direction falls within the prediction area and the confidence level of the current motion direction at the current time is greater than or equal to a threshold;
the above-mentioned process is iteratively performed until the processing unit 702 detects an end search event of the target image, and determines the contour of the target image according to each contour point searched by the contour search unit.
In one embodiment, the processing unit 702, when configured to determine the set of contours to be loaded in the target image, may be specifically configured to:
predicting block attribute information of a first contour block to be loaded to obtain predicted block attribute information;
determining a first contour block to be loaded and N second contour blocks corresponding to the corresponding first contour blocks in the target image according to the attribute information of the prediction blocks;
And taking the determined first contour block and the corresponding N second contour blocks as a contour group to be loaded.
In another embodiment, any block attribute information includes center point coordinates of a corresponding contour block, and the predicted block attribute information includes predicted center point coordinates;
the processing unit 702, when configured to predict the block attribute information of the first contour block to be loaded to obtain predicted block attribute information, may be specifically configured to: and acquiring the position coordinates of the currently searched contour point to serve as the predicted center point coordinates.
In another embodiment, any block attribute information includes center point coordinates of a corresponding contour block, and the predicted block attribute information includes predicted center point coordinates;
the current movement direction is one of the N movement directions, each movement direction of the N movement directions is configured with an information table, and any information table comprises: at least one center point coordinate set in a corresponding movement direction;
the processing unit 702, when configured to predict the block attribute information of the first contour block to be loaded to obtain predicted block attribute information, may be specifically configured to: according to the corresponding relation between the motion direction and the information table, taking the information table corresponding to the current motion direction as a target information table; and pre-fetching a central point coordinate from central point coordinates included in the target information table as a predicted central point coordinate.
In another embodiment, the current movement direction refers to: the direction pointing from the contour point searched last time to the currently searched contour point;
the processing unit 702, when configured to predict the confidence level of the current motion direction at the current moment, may be specifically configured to:
determining the movement direction of the previous moment and acquiring the confidence coefficient of the current movement direction at the previous moment; wherein the previous time is the time when the contour point was last searched;
and carrying out consistency check on the motion direction at the previous moment and the current motion direction, and predicting the confidence coefficient of the current motion direction at the current moment according to a consistency check result and the confidence coefficient of the current motion direction at the previous moment.
In another embodiment, the processing unit 702, when configured to predict the confidence level of the current motion direction at the current time according to the consistency check result and the confidence level of the current motion direction at the previous time, may be specifically configured to:
invoking a target confidence prediction model to predict the confidence according to the position coordinates of the currently searched contour points, and obtaining a predicted value;
and calibrating the predicted value according to the consistency check result and the confidence coefficient of the current movement direction at the previous moment to obtain the confidence coefficient of the current movement direction at the current moment.
In another embodiment, when the processing unit 702 is configured to calibrate the predicted value according to the consistency check result and the confidence coefficient of the current motion direction at the previous time, it may be specifically configured to:
if the consistency check result is a consistency check result, carrying out summation operation on the confidence coefficient of the current movement direction at the previous moment and the predicted value to obtain the confidence coefficient of the current movement direction at the current moment;
and if the consistency check result is a result which does not pass the consistency check, carrying out difference value operation on the confidence coefficient of the current movement direction at the previous moment and the predicted value to obtain the confidence coefficient of the current movement direction at the current moment.
In another embodiment, the current movement direction is one of the N movement directions;
each of the N moving directions is provided with a confidence coefficient prediction model, and after each contour point is searched and the corresponding moving direction is determined, the confidence coefficient prediction model corresponding to each moving direction is used for independently predicting the confidence coefficient of the corresponding moving direction at the corresponding moment;
The target confidence prediction model refers to: the confidence prediction model corresponding to the current motion direction; the step of predicting the confidence of the current motion direction at the current moment is performed by invoking the target confidence prediction model; and, in the process of predicting the confidence of the current motion direction at the current moment, the confidence of the current motion direction at the previous moment that is obtained was historically predicted by the target confidence prediction model.
In another embodiment, each of the N moving directions is configured with an information table, and any information table is used for storing: confidence predicted by the confidence prediction model corresponding to the corresponding motion direction;
in the process of predicting the confidence coefficient of the current movement direction at the current moment by the target confidence coefficient prediction model, acquiring the confidence coefficient of the current movement direction at the previous moment from an information table corresponding to the current movement direction;
the processing unit 702 is further configured to: and refreshing an information table corresponding to the current movement direction by adopting the confidence coefficient of the current movement direction predicted by the target confidence coefficient prediction model at the current moment.
In another embodiment, the processing unit 702, when configured to predict the confidence level of the current motion direction at the current time according to the consistency check result and the confidence level of the current motion direction at the previous time, may be specifically configured to:
if the consistency check result is a consistency check result, increasing the confidence coefficient of the current movement direction at the previous moment to obtain the confidence coefficient of the current movement direction at the current moment;
and if the consistency check result is a result which does not pass the consistency check, reducing the confidence coefficient of the current movement direction at the previous moment to obtain the confidence coefficient of the current movement direction at the current moment.
In another embodiment, the current movement direction refers to: the direction pointing from the contour point searched last time to the currently searched contour point;
the processing unit 702, when configured to predict the confidence of the current motion direction at the current moment, may be specifically configured to:
determining the movement direction of the previous moment and acquiring the confidence coefficient of the movement direction of the previous moment; wherein the previous time is the time when the contour point was last searched;
and carrying out consistency check on the motion direction of the previous moment and the current motion direction, and predicting the confidence coefficient of the current motion direction at the current moment according to a consistency check result and the confidence coefficient of the motion direction of the previous moment.
In another embodiment, the processing unit 702 is further configured to:
if the current motion direction does not fall into the prediction area, or if the current motion direction falls into the prediction area and the confidence of the current motion direction at the current moment is smaller than a threshold, continuing to search contour points in the currently loaded contour group.
According to another embodiment of the present application, the units in the image searching apparatus shown in fig. 7 may be separately or entirely combined into one or several other units, or one (or more) of them may be further split into a plurality of functionally smaller units, which can achieve the same operation without affecting the realization of the technical effects of the embodiments of the present application. The above units are divided based on logical functions; in practical applications, the function of one unit may be implemented by a plurality of units, or the functions of a plurality of units may be implemented by one unit. In other embodiments of the present application, the image searching apparatus may also include other units; in practical applications, these functions may also be implemented with the assistance of other units and through the cooperation of a plurality of units.
According to another embodiment of the present application, an image searching apparatus as shown in fig. 7 may be constructed, and the image searching method of the embodiments of the present application may be implemented, by running a computer program (comprising one or more instructions) capable of executing the steps involved in the corresponding methods shown in fig. 2 or fig. 4 on a general-purpose computing device, such as a computer, that includes processing elements such as a central processing unit (CPU) and storage elements such as a random access memory (RAM) and a read-only memory (ROM). The computer program may be recorded on, for example, a computer-readable storage medium, loaded into the above computing device via the computer-readable storage medium, and run therein.
In the embodiment of the application, in the process of searching for the contour of the target image, the target image is not loaded directly into the contour searching unit; instead, a contour group composed of several image blocks divided from the target image is loaded. While the contour searching unit is invoked to search for contour points in the currently loaded contour group, each time a contour point is searched it is judged in real time whether the contour searching unit needs to load a new contour group, so that the target image can be loaded into the contour searching unit continuously in the form of contour groups, refreshing the storage space of the contour searching unit and searching, until the contour of the target image is obtained. Throughout the search based on the contour searching unit, only a storage space of one contour group's size is ever occupied in the storage space of the contour searching unit; compared with loading the whole image into the contour searching unit, this reduces the occupation of the storage space of the contour searching unit and saves its storage resources.
In addition, since whether the contour searching unit needs to load a new contour group is judged each time a contour point is searched, the probability of continuing to search for contour points along the current motion direction can be predicted based on the current motion direction of the contour; according to this predicted probability it can be accurately judged in real time whether a new contour group needs to be loaded, and when it is judged that one is needed, the contour group to be loaded is determined, so that the contour group to be loaded can be prefetched, hiding the storage time required for loading it and improving the overall search performance.
Based on the descriptions of the method embodiments and the apparatus embodiments, the embodiment of the application further provides a computer device; the computer device may be a stand-alone device (e.g., one or more of a server, a node, a terminal, etc.), or a component inside a stand-alone device (e.g., a chip, a software module, a hardware module, etc.). Referring to fig. 8, the computer device includes at least a processor 801, an input interface 802, an output interface 803, and a computer storage medium 804, which may be connected by a bus or in other ways within the computer device. The computer storage medium 804 may reside in the memory of the computer device and is configured to store a computer program comprising one or more instructions; the processor 801 is configured to execute the one or more instructions of the computer program stored in the computer storage medium 804. The processor 801, or CPU (Central Processing Unit), is the computing core and control core of the computer device; it is adapted to implement one or more instructions, and in particular to load and execute one or more instructions so as to realize the corresponding method flow or corresponding functions.
In one embodiment, the processor 801 described in the embodiments of the present application may be used to implement the related process of image searching, which may specifically include: after determining a contour group to be loaded in a target image, acquiring the corresponding contour group and loading it into a contour searching unit so as to refresh the storage space of the contour searching unit, wherein the currently loaded contour group comprises a first contour block and N second contour blocks, N being a positive integer, the N second contour blocks comprise the contour blocks located in the N neighbor areas of the first contour block, one neighbor area corresponds to one motion direction, the area where the N second contour blocks are located is the prediction area, and a contour block refers to an image block divided from the target image; invoking the contour searching unit to perform contour point searching from the first contour block in the currently loaded contour group based on the N motion directions; each time a contour point is searched, determining the current motion direction of the contour based on the currently searched contour point and predicting the confidence of the current motion direction at the current moment, where any confidence is used to indicate the probability of continuing to search for contour points along the respective motion direction; if the current motion direction falls within the prediction area and the confidence of the current motion direction at the current moment is greater than or equal to a threshold, determining the contour group to be loaded in the target image; and iteratively performing the above process until an end-search event of the target image is detected, determining the contour of the target image according to the contour points searched by the contour searching unit, and so on.
The embodiment of the application also provides a computer storage medium (Memory), which is a memory device in a computer device and is used for storing computer programs and data. It is understood that the computer storage medium herein may include both the built-in storage medium of the computer device and an extended storage medium supported by the computer device. The computer storage medium provides a storage space that stores the operating system of the computer device. Also stored in the storage space is a computer program comprising one or more instructions, which may be one or more pieces of program code, adapted to be loaded and executed by the processor 801. The computer storage medium herein may be a high-speed RAM or a non-volatile memory, such as at least one magnetic disk memory; alternatively, it may be at least one computer storage medium located remotely from the aforementioned processor.
In one embodiment, one or more instructions stored in a computer storage medium may be loaded and executed by a processor to implement the corresponding steps in the method embodiments described above with respect to FIG. 2 or FIG. 4; in particular implementations, one or more instructions in a computer storage medium may be loaded by a processor and perform the steps of:
after determining a contour group to be loaded in a target image, acquiring the corresponding contour group and loading it to a contour searching unit so as to refresh the storage space of the contour searching unit; wherein the currently loaded contour group comprises a first contour block and N second contour blocks, N being a positive integer; the N second contour blocks are contour blocks located in N adjacent areas of the first contour block, one adjacent area corresponds to one motion direction, the area where the N second contour blocks are located is a prediction area, and a contour block refers to an image block divided from the target image;
invoking the contour searching unit to perform contour point searching from the first contour block in the currently loaded contour group based on the N motion directions;
determining the current motion direction of the contour based on the currently searched contour point every time one contour point is searched, and predicting the confidence coefficient of the current motion direction at the current moment; any confidence coefficient is used to indicate the probability of continuing to search for contour points along the corresponding motion direction;
if the current movement direction falls into the prediction area and the confidence coefficient of the current movement direction at the current moment is greater than or equal to a threshold value, determining a contour group to be loaded in the target image;
and iteratively executing the above process until the end-search event of the target image is detected, and determining the contour of the target image according to each contour point searched by the contour searching unit.
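For illustration only, the following is a minimal, self-contained Python sketch of the loading-and-searching loop outlined above; it is not the claimed method itself. The contour tracing is abstracted away (the trace generator simply yields the border of a square), and the block size, the 8-neighbourhood of second contour blocks, the +1/-1 confidence update, the threshold value and all identifiers are assumptions introduced for this sketch:

BLOCK = 8          # contour blocks are BLOCK x BLOCK image tiles (assumed size)
THRESHOLD = 3      # confidence threshold (toy value)

def trace():
    """Stand-in for the contour searching unit: yields the border of a square."""
    pts = [(x, 5) for x in range(5, 25)]            # top edge, left to right
    pts += [(24, y) for y in range(6, 25)]          # right edge, top to bottom
    pts += [(x, 24) for x in range(23, 4, -1)]      # bottom edge, right to left
    pts += [(5, y) for y in range(23, 5, -1)]       # left edge, bottom to top
    yield from pts

def block_of(pt):
    return (pt[0] // BLOCK, pt[1] // BLOCK)

def neighbours(blk):
    """The N second contour blocks: here the 8-neighbourhood of the first block."""
    bx, by = blk
    return {(bx + dx, by + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)}

def direction(prev, cur):
    return ((cur[0] > prev[0]) - (cur[0] < prev[0]),
            (cur[1] > prev[1]) - (cur[1] < prev[1]))

first_block, second_blocks = None, set()
confidence = {}                       # per-direction confidence, updated each step
prev_pt, prev_dir, contour = None, None, []

for pt in trace():
    if first_block is None:                            # load the initial contour group
        first_block, second_blocks = block_of(pt), neighbours(block_of(pt))
    contour.append(pt)                                 # one contour point searched
    if prev_pt is not None:
        cur_dir = direction(prev_pt, pt)
        # consistency with the previous direction raises confidence, otherwise lowers it
        confidence[cur_dir] = confidence.get(cur_dir, 0) + (1 if cur_dir == prev_dir else -1)
        # when the contour has moved into the prediction area with enough confidence,
        # determine (and, in hardware, prefetch) the next contour group to load
        if block_of(pt) in second_blocks and confidence[cur_dir] >= THRESHOLD:
            first_block, second_blocks = block_of(pt), neighbours(block_of(pt))
        prev_dir = cur_dir
    prev_pt = pt

print(f"traced {len(contour)} contour points, last loaded first block {first_block}")

In this sketch only one group-sized set of blocks (first_block plus second_blocks) is ever held at a time, mirroring how the storage space of the contour searching unit is refreshed group by group.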
In one embodiment, the processor 801, when configured to determine the contour group to be loaded in the target image, may be specifically configured to:
predicting block attribute information of a first contour block to be loaded to obtain predicted block attribute information;
determining, in the target image, the first contour block to be loaded and the N second contour blocks corresponding to that first contour block according to the predicted block attribute information;
and taking the determined first contour block and the corresponding N second contour blocks as a contour group to be loaded.
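As a rough illustration of this step, the sketch below maps a predicted centre point to a first contour block and its N second contour blocks; the block size, the 8-neighbourhood and the function names are assumptions made for this sketch only:

BLOCK = 8
DIRECTIONS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def determine_group_to_load(predicted_center):
    """Maps a predicted centre point coordinate to a first block plus its N second blocks."""
    first = (predicted_center[0] // BLOCK, predicted_center[1] // BLOCK)
    second = {d: (first[0] + d[0], first[1] + d[1]) for d in DIRECTIONS}
    return first, second

# e.g. using the position of the currently searched contour point as the predicted centre
first_block, second_blocks = determine_group_to_load((37, 12))
print(first_block, sorted(second_blocks.values()))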
In another embodiment, any block attribute information includes center point coordinates of a corresponding contour block, and the predicted block attribute information includes predicted center point coordinates;
the processor 801, when configured to predict block attribute information of a first contour block to be loaded to obtain predicted block attribute information, may be specifically configured to: and acquiring the position coordinates of the currently searched contour point to serve as the predicted center point coordinates.
In another embodiment, any block attribute information includes center point coordinates of a corresponding contour block, and the predicted block attribute information includes predicted center point coordinates;
the current movement direction is one of the N movement directions, each movement direction of the N movement directions is configured with an information table, and any information table comprises: at least one center point coordinate set in a corresponding movement direction;
the processor 801, when configured to predict the block attribute information of the first contour block to be loaded to obtain the predicted block attribute information, may be specifically configured to: take, according to the correspondence between motion directions and information tables, the information table corresponding to the current motion direction as a target information table; and pre-fetch one center point coordinate from the center point coordinates included in the target information table as the predicted center point coordinate.
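For illustration, one possible shape of such per-direction information tables is sketched below; the direction labels, the stored coordinates and the use of a deque are assumptions of the sketch, not details taken from the embodiments:

from collections import deque

# one information table per motion direction, each holding candidate centre point coordinates
info_tables = {
    "right": deque([(40, 12), (48, 12)]),
    "down":  deque([(32, 20)]),
}

def prefetch_predicted_center(current_direction):
    table = info_tables[current_direction]        # table configured for the current direction
    return table.popleft() if table else None     # pre-fetch one stored centre point coordinate

print(prefetch_predicted_center("right"))         # -> (40, 12)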
In another embodiment, the current movement direction refers to: the direction pointing from the previously searched contour point to the currently searched contour point;
the processor 801, when configured to predict the confidence level of the current motion direction at the current time, may be specifically configured to:
determining the motion direction at the previous moment and acquiring the confidence coefficient of the current motion direction at the previous moment, wherein the previous moment is the moment when a contour point was last searched;
and carrying out a consistency check between the motion direction at the previous moment and the current motion direction, and predicting the confidence coefficient of the current motion direction at the current moment according to the consistency check result and the confidence coefficient of the current motion direction at the previous moment.
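Purely as an illustration, a consistency check of this kind can be as simple as comparing the two directions; the tuple representation of directions below is an assumption of the sketch:

def consistency_check(prev_direction, cur_direction):
    """True when the current motion direction matches the direction at the previous moment."""
    return prev_direction is not None and prev_direction == cur_direction

print(consistency_check((1, 0), (1, 0)))   # passed  -> confidence should be raised
print(consistency_check((1, 0), (0, 1)))   # failed  -> confidence should be lowered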
In another embodiment, when the processor 801 is configured to predict the confidence level of the current motion direction at the current time according to the consistency check result and the confidence level of the current motion direction at the previous time, the processor 801 may be specifically configured to:
invoking a target confidence prediction model to perform confidence prediction according to the position coordinates of the currently searched contour point, obtaining a predicted value;
and calibrating the predicted value according to the consistency check result and the confidence coefficient of the current movement direction at the previous moment to obtain the confidence coefficient of the current movement direction at the current moment.
In another embodiment, the processor 801, when configured to calibrate the predicted value according to the consistency check result and the confidence coefficient of the current movement direction at the previous moment, may be specifically configured to:
if the consistency check result is a result of passing the consistency check, carrying out a summation operation on the confidence coefficient of the current movement direction at the previous moment and the predicted value to obtain the confidence coefficient of the current movement direction at the current moment;
and if the consistency check result is a result of failing the consistency check, carrying out a difference operation on the confidence coefficient of the current movement direction at the previous moment and the predicted value to obtain the confidence coefficient of the current movement direction at the current moment.
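A minimal sketch of this calibration is given below; the linear stand-in for the confidence prediction model and all numeric values are assumptions of the sketch and do not reflect any specific model from the embodiments:

def model_predicted_value(point):
    # hypothetical confidence prediction model evaluated at the current contour point
    return 0.1 * (point[0] % 3)

def calibrate(check_passed, prev_confidence, predicted_value):
    if check_passed:
        return prev_confidence + predicted_value    # summation operation on a passed check
    return prev_confidence - predicted_value        # difference operation on a failed check

value = model_predicted_value((37, 12))             # 0.1 * (37 % 3) = 0.1
print(calibrate(True, 0.5, value), calibrate(False, 0.5, value))   # 0.6 0.4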
In another embodiment, the current movement direction is one of the N movement directions;
each of the N moving directions is provided with a confidence coefficient prediction model, and after each contour point is searched and the corresponding moving direction is determined, the confidence coefficient prediction model corresponding to each moving direction is used for independently predicting the confidence coefficient of the corresponding moving direction at the corresponding moment;
the target confidence prediction model refers to the confidence prediction model corresponding to the current motion direction; the step of predicting the confidence coefficient of the current motion direction at the current moment is performed by invoking the target confidence prediction model; and the confidence coefficient of the current motion direction at the previous moment, which the target confidence prediction model obtains while predicting the confidence coefficient of the current motion direction at the current moment, is a confidence coefficient historically predicted by the target confidence prediction model.
In another embodiment, each of the N moving directions is configured with an information table, and any information table is used for storing: confidence predicted by the confidence prediction model corresponding to the corresponding motion direction;
in the process of predicting the confidence coefficient of the current movement direction at the current moment by the target confidence coefficient prediction model, acquiring the confidence coefficient of the current movement direction at the previous moment from an information table corresponding to the current movement direction;
the processor 801 is further configured to: refresh the information table corresponding to the current movement direction with the confidence coefficient of the current movement direction at the current moment predicted by the target confidence prediction model.
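For illustration only, the sketch below keeps one model object per motion direction, each with its own information-table entry that is refreshed after every prediction; the additive update step, the direction labels and all names are assumptions of the sketch:

class DirectionConfidenceModel:
    """One confidence prediction model per motion direction (toy additive version)."""
    def __init__(self):
        self.table = 0.0                        # information table entry: last predicted confidence

    def predict(self, check_passed, step=0.25):
        prev = self.table                       # confidence at the previous moment (own history)
        cur = prev + step if check_passed else prev - step
        self.table = cur                        # refresh the information table with the new value
        return cur

models = {d: DirectionConfidenceModel() for d in ("up", "down", "left", "right")}
print(models["right"].predict(True), models["right"].predict(True), models["right"].predict(False))
# -> 0.25 0.5 0.25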
In another embodiment, when the processor 801 is configured to predict the confidence level of the current motion direction at the current time according to the consistency check result and the confidence level of the current motion direction at the previous time, the processor 801 may be specifically configured to:
if the consistency check result is a result of passing the consistency check, increasing the confidence coefficient of the current movement direction at the previous moment to obtain the confidence coefficient of the current movement direction at the current moment;
and if the consistency check result is a result of failing the consistency check, decreasing the confidence coefficient of the current movement direction at the previous moment to obtain the confidence coefficient of the current movement direction at the current moment.
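One simple way to realise this increase/decrease behaviour, sketched below for illustration, is a saturating counter of the kind used in branch predictors; the bounds and the step size are assumptions of the sketch, not values from the embodiments:

def update_confidence(prev_confidence, check_passed, lo=0, hi=7):
    """Increase on a passed consistency check, decrease on a failed one, within [lo, hi]."""
    cur = prev_confidence + 1 if check_passed else prev_confidence - 1
    return max(lo, min(hi, cur))

c = 3
for passed in (True, True, True, False):
    c = update_confidence(c, passed)
print(c)   # 3 -> 4 -> 5 -> 6 -> 5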
In another embodiment, the current movement direction refers to: the direction pointing from the previously searched contour point to the currently searched contour point;
the processor 801, when used to predict the confidence level for the current direction of motion, may be specifically configured to:
determining the movement direction of the previous moment and acquiring the confidence coefficient of the movement direction of the previous moment; wherein the previous time is the time when the contour point was last searched;
and carrying out consistency check on the motion direction of the previous moment and the current motion direction, and predicting the confidence coefficient of the current motion direction at the current moment according to a consistency check result and the confidence coefficient of the motion direction of the previous moment.
In another embodiment, the processor 801 is further configured to:
if the current motion direction does not fall into the prediction area, or if the current motion direction falls into the prediction area and the confidence of the current motion direction at the current moment is smaller than a threshold, continuing to search contour points in the currently loaded contour group.
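To illustrate the branch between loading a new contour group and continuing in the currently loaded one, a minimal decision helper is sketched below; the threshold value and the set representation of the prediction area are assumptions of the sketch:

THRESHOLD = 0.6

def should_load_new_group(cur_direction, prediction_area, cur_confidence):
    """True only when the direction points into the prediction area and the confidence is high enough."""
    return cur_direction in prediction_area and cur_confidence >= THRESHOLD

prediction_area = {(1, 0), (0, 1), (1, 1)}        # directions covered by the N second contour blocks
print(should_load_new_group((1, 0), prediction_area, 0.7))    # True  -> determine and prefetch a new group
print(should_load_new_group((1, 0), prediction_area, 0.4))    # False -> keep searching in the loaded group
print(should_load_new_group((-1, 0), prediction_area, 0.9))   # False -> keep searching in the loaded group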
In the embodiment of the application, in the process of searching for the contour of the target image, the whole target image is not loaded directly into the contour searching unit; instead, contour groups formed from the image blocks divided from the target image are loaded. Each time one contour point is searched during the contour point search in the currently loaded contour group, whether the contour searching unit needs to load a new contour group is judged in real time, so that the target image can be continuously loaded into the contour searching unit group by group to refresh its storage space and continue the search until the contour of the target image is obtained. Throughout the search, only a storage space of one contour group's size is ever occupied in the contour searching unit; compared with loading the whole image into the contour searching unit, this reduces the occupation of the storage space of the contour searching unit and saves its storage resources.
In addition, each time a contour point is searched, the process of judging whether the contour searching unit needs to load a new contour group predicts, based on the current motion direction of the contour, the probability of continuing to search for contour points along that direction. According to this predicted probability, whether a new contour group needs to be loaded can be judged accurately and in real time; when it is judged that a new contour group needs to be loaded, the contour group to be loaded is determined and can then be prefetched, so that the storage time required for loading the contour group is hidden and the overall search performance is improved.
The embodiments of the present application provide a computer program product comprising a computer program stored in a computer storage medium; the processor of the computer device reads the computer program from the computer storage medium and executes it, so that the computer device performs the method embodiments shown in fig. 2 or fig. 4 described above. It is to be understood that the foregoing disclosure describes only preferred embodiments of the present application and is, of course, not intended to limit the scope of protection, which is defined by the appended claims.

Claims (15)

1. An image search method, comprising:
after determining a contour group to be loaded in a target image, acquiring the corresponding contour group and loading it to a contour searching unit so as to refresh the storage space of the contour searching unit; wherein the currently loaded contour group comprises a first contour block and N second contour blocks, N being a positive integer; the N second contour blocks are contour blocks located in N adjacent areas of the first contour block, one adjacent area corresponds to one motion direction, the area where the N second contour blocks are located is a prediction area, and a contour block refers to an image block divided from the target image;
invoking the contour searching unit to perform contour point searching from the first contour block in the currently loaded contour group based on the N motion directions;
determining the current motion direction of the contour based on the currently searched contour point every time one contour point is searched, and predicting the confidence coefficient of the current motion direction at the current moment; any confidence coefficient is used to indicate the probability of continuing to search for contour points along the corresponding motion direction;
if the current movement direction falls into the prediction area and the confidence coefficient of the current movement direction at the current moment is greater than or equal to a threshold value, determining a contour group to be loaded in the target image;
and iteratively executing the process until the ending search event of the target image is detected, and determining the contour of the target image according to each contour point searched by the contour search unit.
2. The method of claim 1, wherein the determining the contour group to be loaded in the target image comprises:
predicting block attribute information of a first contour block to be loaded to obtain predicted block attribute information;
determining, in the target image, the first contour block to be loaded and the N second contour blocks corresponding to that first contour block according to the predicted block attribute information;
and taking the determined first contour block and the corresponding N second contour blocks as the contour group to be loaded.
3. The method of claim 2, wherein any block attribute information includes center point coordinates of a corresponding contour block, and the predicted block attribute information includes predicted center point coordinates;
the predicting the block attribute information of the first contour block to be loaded to obtain predicted block attribute information includes: and acquiring the position coordinates of the currently searched contour point to serve as the predicted center point coordinates.
4. The method of claim 2, wherein any block attribute information includes center point coordinates of a corresponding contour block, and the predicted block attribute information includes predicted center point coordinates;
the current movement direction is one of the N movement directions, each movement direction of the N movement directions is configured with an information table, and any information table comprises: at least one center point coordinate set in a corresponding movement direction;
the predicting the block attribute information of the first contour block to be loaded to obtain predicted block attribute information comprises: taking, according to the correspondence between motion directions and information tables, the information table corresponding to the current motion direction as a target information table; and pre-fetching one center point coordinate from the center point coordinates included in the target information table as the predicted center point coordinate.
5. The method of claim 1, wherein the current motion direction is: the direction pointing from the previously searched contour point to the currently searched contour point;
the predicting the confidence of the current motion direction at the current moment comprises the following steps:
determining the movement direction of the previous moment and acquiring the confidence coefficient of the current movement direction at the previous moment; wherein the previous time is the time when the contour point was last searched;
and carrying out consistency check on the motion direction at the previous moment and the current motion direction, and predicting the confidence coefficient of the current motion direction at the current moment according to a consistency check result and the confidence coefficient of the current motion direction at the previous moment.
6. The method of claim 5, wherein predicting the confidence level of the current direction of motion at the current time based on the consistency check result and the confidence level of the current direction of motion at the previous time comprises:
invoking a target confidence prediction model to predict the confidence according to the position coordinates of the currently searched contour points, and obtaining a predicted value;
and calibrating the predicted value according to the consistency check result and the confidence coefficient of the current movement direction at the previous moment to obtain the confidence coefficient of the current movement direction at the current moment.
7. The method of claim 6, wherein calibrating the predicted value based on the consistency check result and the confidence level of the current movement direction at the previous time to obtain the confidence level of the current movement direction at the current time comprises:
if the consistency check result is a result of passing the consistency check, carrying out a summation operation on the confidence coefficient of the current movement direction at the previous moment and the predicted value to obtain the confidence coefficient of the current movement direction at the current moment;
and if the consistency check result is a result of failing the consistency check, carrying out a difference operation on the confidence coefficient of the current movement direction at the previous moment and the predicted value to obtain the confidence coefficient of the current movement direction at the current moment.
8. The method of claim 6, wherein the current direction of motion is one of the N directions of motion;
each of the N moving directions is provided with a confidence coefficient prediction model, and after each contour point is searched and the corresponding moving direction is determined, the confidence coefficient prediction model corresponding to each moving direction is used for independently predicting the confidence coefficient of the corresponding moving direction at the corresponding moment;
the target confidence prediction model refers to the confidence prediction model corresponding to the current motion direction; the step of predicting the confidence coefficient of the current motion direction at the current moment is performed by invoking the target confidence prediction model; and the confidence coefficient of the current motion direction at the previous moment, which the target confidence prediction model obtains while predicting the confidence coefficient of the current motion direction at the current moment, is a confidence coefficient historically predicted by the target confidence prediction model.
9. The method of claim 8, wherein each of the N directions of motion is configured with a table of information, any of the tables of information being used to store: confidence predicted by the confidence prediction model corresponding to the corresponding motion direction;
in the process of predicting the confidence coefficient of the current movement direction at the current moment by the target confidence coefficient prediction model, acquiring the confidence coefficient of the current movement direction at the previous moment from an information table corresponding to the current movement direction;
the method further comprises the steps of: and refreshing an information table corresponding to the current movement direction by adopting the confidence coefficient of the current movement direction predicted by the target confidence coefficient prediction model at the current moment.
10. The method of claim 5, wherein predicting the confidence level of the current direction of motion at the current time based on the consistency check result and the confidence level of the current direction of motion at the previous time comprises:
if the consistency check result is a result of passing the consistency check, increasing the confidence coefficient of the current movement direction at the previous moment to obtain the confidence coefficient of the current movement direction at the current moment;
and if the consistency check result is a result of failing the consistency check, decreasing the confidence coefficient of the current movement direction at the previous moment to obtain the confidence coefficient of the current movement direction at the current moment.
11. The method of claim 1, wherein the current motion direction is: the direction pointing from the previously searched contour point to the currently searched contour point;
the predicting the confidence of the current motion direction comprises the following steps:
determining the movement direction of the previous moment and acquiring the confidence coefficient of the movement direction of the previous moment; wherein the previous time is the time when the contour point was last searched;
and carrying out consistency check on the motion direction of the previous moment and the current motion direction, and predicting the confidence coefficient of the current motion direction at the current moment according to a consistency check result and the confidence coefficient of the motion direction of the previous moment.
12. The method of claim 1, wherein the method further comprises:
if the current motion direction does not fall into the prediction area, or if the current motion direction falls into the prediction area and the confidence of the current motion direction at the current moment is smaller than a threshold, continuing to search contour points in the currently loaded contour group.
13. An image search apparatus, comprising:
an acquisition unit, configured to, after a contour group to be loaded is determined in a target image, acquire the corresponding contour group and load it to a contour searching unit so as to refresh the storage space of the contour searching unit; wherein the currently loaded contour group comprises a first contour block and N second contour blocks, N being a positive integer; the N second contour blocks are contour blocks located in N adjacent areas of the first contour block, one adjacent area corresponds to one motion direction, the area where the N second contour blocks are located is a prediction area, and a contour block refers to an image block divided from the target image;
a processing unit, configured to invoke the contour searching unit to perform contour point searching from the first contour block in the currently loaded contour group based on the N motion directions;
the processing unit is further configured to determine the current motion direction of the contour based on the currently searched contour point every time one contour point is searched, and to predict the confidence coefficient of the current motion direction at the current moment; any confidence coefficient is used to indicate the probability of continuing to search for contour points along the corresponding motion direction;
the processing unit is further configured to determine a profile group to be loaded in the target image if the current motion direction falls within the prediction area and the confidence level of the current motion direction at the current moment is greater than or equal to a threshold value;
and iteratively executing the process until the processing unit detects the search ending event of the target image, and determining the contour of the target image according to each contour point searched by the contour searching unit.
14. A computer device comprising an input interface and an output interface, further comprising: a processor and a computer storage medium;
wherein the processor is adapted to implement one or more instructions, the computer storage medium storing one or more instructions adapted to be loaded by the processor and to perform the image search method of any of claims 1-12.
15. A computer storage medium storing one or more instructions adapted to be loaded by a processor and to perform the image search method of any one of claims 1-12.
CN202311281761.4A 2023-09-28 2023-09-28 Image searching method, device, equipment and storage medium Active CN117290537B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311281761.4A CN117290537B (en) 2023-09-28 2023-09-28 Image searching method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311281761.4A CN117290537B (en) 2023-09-28 2023-09-28 Image searching method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117290537A true CN117290537A (en) 2023-12-26
CN117290537B CN117290537B (en) 2024-06-07

Family

ID=89244093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311281761.4A Active CN117290537B (en) 2023-09-28 2023-09-28 Image searching method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117290537B (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070271013A1 (en) * 2006-05-18 2007-11-22 Applied Perception Inc. Vision guidance system and method for identifying the position of crop rows in a field
US20070269114A1 (en) * 2006-05-18 2007-11-22 Applied Perception Inc. Vision guidance system and method for identifying the position of crop rows in a field
CN104376535A (en) * 2014-11-04 2015-02-25 徐州工程学院 Rapid image repairing method based on sample
US20160335777A1 (en) * 2015-05-13 2016-11-17 Anja Borsdorf Method for 2D/3D Registration, Computational Apparatus, and Computer Program
CN109283562A (en) * 2018-09-27 2019-01-29 北京邮电大学 Three-dimensional vehicle localization method and device in a kind of car networking
CN109740537A (en) * 2019-01-03 2019-05-10 广州广电银通金融电子科技有限公司 The accurate mask method and system of pedestrian image attribute in crowd's video image
CN110046600A (en) * 2019-04-24 2019-07-23 北京京东尚科信息技术有限公司 Method and apparatus for human testing
CN112183215A (en) * 2020-09-02 2021-01-05 重庆利龙科技产业(集团)有限公司 Human eye positioning method and system combining multi-feature cascade SVM and human eye template
CN112116639A (en) * 2020-09-08 2020-12-22 苏州浪潮智能科技有限公司 Image registration method and device, electronic equipment and storage medium
US20230252664A1 (en) * 2020-09-08 2023-08-10 Inspur Suzhou Intelligent Technology Co., Ltd. Image Registration Method and Apparatus, Electronic Apparatus, and Storage Medium
CN112417977A (en) * 2020-10-26 2021-02-26 青岛聚好联科技有限公司 Target object searching method and terminal
CN112614161A (en) * 2020-12-28 2021-04-06 之江实验室 Three-dimensional object tracking method based on edge confidence
CN113724290A (en) * 2021-07-22 2021-11-30 西北工业大学 Multi-level template self-adaptive matching target tracking method for infrared image
CN114428878A (en) * 2022-04-06 2022-05-03 广东知得失网络科技有限公司 Trademark image retrieval method and system
CN115393734A (en) * 2022-08-30 2022-11-25 吉林大学 SAR image ship contour extraction method based on fast R-CNN and CV model combined method
CN116452616A (en) * 2023-03-22 2023-07-18 武汉光庭信息技术股份有限公司 Intersecting contour point co-edge method based on image segmentation

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Marlon C. Moncores et al.: "Large Neighborhood Search Applied to the Software Module Clustering Problem", Computer & Operations Research, vol. 19, 31 March 2018 (2018-03-31), pages 92-111 *
曹之江; 郝矿荣; 丁永生: "An Optimized Algorithm for Human Body Contour Extraction Based on GVF-Snake", Computer Engineering, no. 22, 20 November 2008 (2008-11-20), pages 210-212 *
钟芙蓉; 张福泉: "An Image Inpainting Algorithm Coupling a Variance Constraint Factor with a Search Region Decision Model", Journal of Southwest China Normal University (Natural Science Edition), no. 06, 20 June 2018 (2018-06-20), pages 140-147 *
骆渊; 周成平; 丁明跃; 娄联堂: "Research on Automatic Contour Matching Methods", Tactical Missile Technology, no. 03, 15 May 2007 (2007-05-15), pages 79-83 *

Also Published As

Publication number Publication date
CN117290537B (en) 2024-06-07

Similar Documents

Publication Publication Date Title
CN112966522B (en) Image classification method and device, electronic equipment and storage medium
US20200410273A1 (en) Target detection method and apparatus, computer-readable storage medium, and computer device
CN112232293B (en) Image processing model training method, image processing method and related equipment
CN108229343B (en) Target object key point detection method, deep learning neural network and device
TW202139183A (en) Method of detecting object based on artificial intelligence, device, equipment and computer-readable storage medium
CN110084299B (en) Target detection method and device based on multi-head fusion attention
JP2021508123A (en) Remote sensing Image recognition methods, devices, storage media and electronic devices
US9691132B2 (en) Method and apparatus for inferring facial composite
US10706322B1 (en) Semantic ordering of image text
CN110598788B (en) Target detection method, target detection device, electronic equipment and storage medium
CN110782430A (en) Small target detection method and device, electronic equipment and storage medium
CN115082667A (en) Image processing method, device, equipment and storage medium
CN113781493A (en) Image processing method, image processing apparatus, electronic device, medium, and computer program product
CN112561074A (en) Machine learning interpretable method, device and storage medium
CN117296078A (en) Optical flow techniques and systems for accurately identifying and tracking moving objects
CN117290537B (en) Image searching method, device, equipment and storage medium
CN113704276A (en) Map updating method and device, electronic equipment and computer readable storage medium
CN115620321B (en) Table identification method and device, electronic equipment and storage medium
CN111400524A (en) AI-based variable-scale geological map text vectorization method and system
CN113780532B (en) Training method, device, equipment and storage medium of semantic segmentation network
US11281935B2 (en) 3D object detection from calibrated 2D images
CN117011416A (en) Image processing method, device, equipment, medium and program product
CN113095286A (en) Big data image processing algorithm and system
US20140050401A1 (en) Fast Image Processing for Recognition Objectives System
CN115731588B (en) Model processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant