WO2021147563A1 - Target detection method and apparatus, electronic device, and computer-readable storage medium - Google Patents

Target detection method and apparatus, electronic device, and computer-readable storage medium (目标检测方法、装置、电子设备及计算机可读存储介质)

Info

Publication number
WO2021147563A1
Authority
WO
WIPO (PCT)
Prior art keywords
corner point
corner
image
point
detected
Application number
PCT/CN2020/135967
Other languages
English (en)
French (fr)
Chinese (zh)
Inventor
王飞
钱晨
Original Assignee
上海商汤临港智能科技有限公司
Application filed by 上海商汤临港智能科技有限公司
Priority to JP2021557733A (published as JP2022526548A)
Priority to KR1020217030884A (published as KR20210129189A)
Publication of WO2021147563A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Definitions

  • The embodiments of the present disclosure relate to the field of image recognition technology, and in particular to a target detection method and apparatus, an electronic device, and a computer-readable storage medium.
  • Target detection is a fundamental problem of computer vision, and many computer vision applications rely on it, such as autonomous driving, video surveillance, and mobile entertainment. Its main task is to mark the location of an object in an image with a detection frame.
  • A target detection algorithm based on object key points can determine the location of an object in an image: all key points of the objects in the image are determined first, and key points belonging to the same object are then matched to obtain the detection frame of that object.
  • However, key points corresponding to similar objects match one another with a high degree, which easily leads to wrong detection results, for example a single detection frame containing multiple objects. Therefore, the detection accuracy of current target detection methods is low.
  • The embodiments of the present disclosure provide at least one target detection solution.
  • Embodiments of the present disclosure provide a target detection method, including: acquiring an image to be detected; determining, based on the image to be detected, the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point, where the corner points characterize the positions of the target objects in the image to be detected; and determining the target object in the image to be detected based on the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point.
  • Because a corner point refers to a main feature point in the image, the corner points in the image to be detected can characterize the position of each target object in the image to be detected. For example, the corner points can include an upper left corner point and a lower right corner point, where the upper left corner point refers to the intersection of the straight line corresponding to the upper contour of the target object and the straight line corresponding to the left contour of the target object, and the lower right corner point refers to the intersection of the straight line corresponding to the lower contour of the target object and the straight line corresponding to the right contour of the target object.
  • For an upper left corner point and a lower right corner point belonging to the same target object, the positions pointed to by their centripetal offset tensors should be relatively close. Therefore, the target detection method proposed in the embodiments of the present disclosure can determine, based on the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point, the corner points belonging to the same target object, and the same target object can then be detected based on the determined corner points.
  • In a possible implementation, the determining, based on the image to be detected, of the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point includes: performing feature extraction on the image to be detected to obtain an initial feature map; performing corner pooling processing on the initial feature map to obtain a corner-pooled feature map; and determining, based on the corner-pooled feature map, the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point.
  • The method provided by the embodiments of the present disclosure obtains an initial feature map by extracting features from the image to be detected, and performs corner pooling processing on the initial feature map, so as to obtain a feature map from which the corner points and the centripetal offset tensors corresponding to the corner points can be conveniently extracted, that is, the corner-pooled feature map.
  • In a possible implementation, determining the corner position information of each corner point in the image to be detected based on the corner-pooled feature map includes: generating a corner heat map based on the corner-pooled feature map and determining the probability value of each feature point in the corner heat map being a corner point; selecting corner points from the feature points according to the probability values; acquiring the position information of each selected corner point in the corner heat map and the local offset information corresponding to each corner point, where the local offset information is used to indicate the position offset, in the corner heat map, of the real physical point represented by the corresponding corner point; and determining the corner position information of each corner point in the image to be detected based on the position information of each corner point in the corner heat map, the local offset information corresponding to each corner point, and the size ratio between the corner heat map and the image to be detected.
  • The embodiments of the present disclosure thus provide a way of determining the corner position information of each corner point in the image to be detected. This process introduces a corner heat map, determines the feature points that can serve as corner points through the probability value of each feature point being a corner point, and, after the corner points are selected, corrects the position information of each corner point in the corner heat map to determine the corner position information of the corner point in the image to be detected. This method can obtain corner position information of higher accuracy, which facilitates the subsequent detection of the position of the target object in the image to be detected based on the corner points.
  • In a possible implementation, determining the centripetal offset tensor corresponding to each corner point based on the corner-pooled feature map includes: determining, based on the corner-pooled feature map, the steering offset tensor corresponding to each feature point in the corner-pooled feature map, where the steering offset tensor corresponding to each feature point represents the offset tensor from that feature point to the center point of the target object in the image to be detected; determining the offset domain information of each feature point based on its steering offset tensor, where the offset domain information includes the offset tensors from multiple initial feature points associated with the feature point to their corresponding offset feature points; adjusting the feature data of the feature points in the corner-pooled feature map based on the corner-pooled feature map and the offset domain information to obtain an adjusted feature map; and determining the centripetal offset tensor corresponding to each corner point based on the adjusted feature map.
  • In the process of determining the centripetal offset tensor, target object information is taken into account, for example by introducing the steering offset tensor corresponding to the corner points and the offset domain information of the feature points. The feature data of the feature points in the corner-pooled feature map are adjusted so that the feature data in the adjusted feature map contain richer target object information, and a more accurate centripetal offset tensor corresponding to each corner point can therefore be determined. Through an accurate centripetal offset tensor, the position information of the center point pointed to by a corner point can be accurately obtained, so that the position of the target object in the image to be detected can be accurately detected.
  • In a possible implementation, the corner heat map corresponding to the image to be detected includes corner heat maps corresponding to multiple channels, and each of the multiple channels corresponds to a preset object category. After the probability value of each feature point in the corner heat map being a corner point is determined based on the corner heat map, the detection method further includes: determining, for each channel, whether a corner point exists in the corner heat map corresponding to that channel based on the probability values; and, if a corner point exists in the corner heat map corresponding to the channel, determining that the image to be detected contains a target object of the preset object category corresponding to that channel.
  • In this way, a corner heat map containing a preset number of channels can be obtained, and by determining whether a corner point exists in the corner heat map corresponding to each channel, it can be determined whether a target object of the category corresponding to that channel exists in the image to be detected.
  • In a possible implementation, determining the target object in the image to be detected based on the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point includes: determining the detection frame of the target object in the image to be detected.
  • The method provided by the embodiments of the present disclosure can determine the detection frame of each target object based on the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point, so that the position information of the target object in the image to be detected can be determined.
  • In a possible implementation, determining the detection frame of the target object in the image to be detected based on the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point includes: determining candidate corner point pairs based on the corner position information of the corner points; determining, for each candidate corner point pair, the position information of the center point pointed to by each corner point in the pair and the center area information corresponding to the pair; and determining the detection frame of the target object from the candidate detection frames based on the position information of the center points pointed to by the corner points in each candidate corner point pair and the center area information corresponding to the candidate corner point pair.
  • In this way, the corner position information of the corner points is first used to determine candidate corner point pairs that can constitute candidate detection frames, and the centripetal offset tensor corresponding to each corner point in a candidate corner point pair is then used to determine whether the objects enclosed by the candidate detection frame are the same target object, so that the detection frames of all target objects in the image to be detected can be detected more accurately.
  • In a possible implementation, determining the center region information corresponding to a candidate corner point pair based on the corner position information of each corner point of the candidate corner point pair in the image to be detected includes: determining the coordinate range of the central area frame corresponding to the candidate corner point pair.
  • In a possible implementation, determining the detection frame of the target object includes: selecting valid candidate corner point pairs based on the position information of the center point pointed to by each corner point in each candidate corner point pair and the central area information corresponding to the candidate corner point pair; determining the score of the candidate detection frame corresponding to each valid candidate corner point pair based on the position information of the center points pointed to by the corner points of the valid candidate corner point pair, the central area information corresponding to the valid candidate corner point pair, and the probability value corresponding to each corner point in the valid candidate corner point pair, where the probability value corresponding to each corner point is used to indicate the probability that the feature point corresponding to that corner point in the corner heat map is a corner point; and determining the detection frame of the target object from the candidate detection frames based on the scores.
  • The method provided by the embodiments of the present disclosure effectively screens the candidate corner point pairs that constitute the candidate detection frames, so that candidate detection frames each representing only one target object can be screened out; soft non-maximum suppression is then performed on these candidate detection frames, so that accurate detection frames characterizing the target objects are obtained.
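  • As a concrete illustration of the soft non-maximum suppression mentioned above, the following is a minimal NumPy sketch. The Gaussian decay form, the sigma value, and the score threshold are illustrative assumptions rather than values taken from this disclosure; the idea is that overlapping candidate detection frames are re-scored rather than discarded outright.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes; boxes are [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.05):
    """Gaussian soft-NMS: decay the scores of boxes that overlap the current best box."""
    boxes, scores = boxes.copy(), scores.copy()
    keep_boxes, keep_scores = [], []
    while len(boxes) > 0:
        best = scores.argmax()
        if scores[best] < score_thresh:
            break
        keep_boxes.append(boxes[best])
        keep_scores.append(scores[best])
        boxes = np.delete(boxes, best, axis=0)
        scores = np.delete(scores, best)
        if len(boxes) > 0:
            overlaps = iou(keep_boxes[-1], boxes)
            scores = scores * np.exp(-(overlaps ** 2) / sigma)  # Gaussian decay
    return np.array(keep_boxes), np.array(keep_scores)
```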
  • In a possible implementation, the target detection method further includes: determining the instance information of the target object in the image to be detected based on the detection frame of the target object and the initial feature map obtained by feature extraction of the image to be detected.
  • The method provided by the embodiments of the present disclosure can thus determine the instance information of the target object. An instance here means the pixel-level labeling given to each target object after instance segmentation of the target objects in the image; instance segmentation is accurate to the object, so more accurate position information of the target object in the image to be detected can be obtained.
  • In a possible implementation, determining the instance information of the target object in the image to be detected based on the detection frame of the target object and the initial feature map obtained by feature extraction of the image to be detected includes: determining, based on the detection frame of the target object and the initial feature map, the instance information of the target object in the image to be detected.
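  • This passage does not spell out how the detection frame and the initial feature map are combined into instance information. One common realization, given purely as an assumption and not necessarily the design of this disclosure, crops the initial feature map with RoIAlign at each detection frame and predicts a per-pixel foreground mask; a minimal PyTorch sketch (all layer sizes hypothetical) is:

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class MaskHead(nn.Module):
    """Hypothetical mask head: predicts a foreground mask inside each detection frame."""
    def __init__(self, in_channels=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 1, 1),  # one-channel mask logit
        )

    def forward(self, feature_map, boxes, stride=4.0):
        # boxes: (N, 4) in image coordinates [x1, y1, x2, y2]; roi_align maps them
        # back to feature-map coordinates via spatial_scale.
        rois = torch.cat([torch.zeros(len(boxes), 1), boxes], dim=1)  # batch index 0
        crops = roi_align(feature_map, rois, output_size=(28, 28),
                          spatial_scale=1.0 / stride, aligned=True)
        return self.net(crops).sigmoid()  # (N, 1, 28, 28) instance masks

# Usage sketch: feature_map stands in for the initial h*w*c feature map of one image.
feature_map = torch.randn(1, 256, 128, 128)
boxes = torch.tensor([[40.0, 60.0, 200.0, 300.0]])
masks = MaskHead()(feature_map, boxes)
```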
  • In a possible implementation, the target detection method is implemented by a neural network, and the neural network is obtained by training with sample images containing labeled target sample objects.
  • In a possible implementation, the neural network is trained with the following steps: acquiring a sample image; determining, based on the sample image, the corner position information of each sample corner point in the sample image and the centripetal offset tensor corresponding to each sample corner point; detecting the predicted target sample object in the sample image based on the corner position information of each sample corner point and the centripetal offset tensor corresponding to each sample corner point; and adjusting the network parameter values of the neural network based on the predicted target sample object in the sample image and the labeled target sample object in the sample image.
  • In the neural network training method, a sample image is acquired, the corner position information of each sample corner point in the sample image and the centripetal offset tensor corresponding to each sample corner point are determined based on the sample image, and the target sample object is then detected in the sample image based on this information. Because a sample corner point refers to a main feature point, the sample corner points may include an upper left sample corner point and a lower right sample corner point, where the upper left sample corner point refers to the intersection of the line corresponding to the upper contour of the target sample object and the line corresponding to the left contour of the target sample object, and the lower right sample corner point refers to the intersection of the line corresponding to the lower contour of the target sample object and the line corresponding to the right contour of the target sample object.
  • If an upper left sample corner point and a lower right sample corner point belong to the detection frame of the same target sample object, the positions pointed to by their centripetal offset tensors should be relatively close.
  • Therefore, the neural network training method proposed in the embodiments of the present disclosure determines the sample corner points belonging to the same target sample object based on the corner position information of the sample corner points, which characterize the positions of the target sample objects in the sample image, and the centripetal offset tensor corresponding to each sample corner point; the same target sample object can then be detected based on the determined sample corner points, and the neural network parameters are continuously adjusted based on the detected target objects in the sample image, so that a neural network with higher accuracy is obtained, which can accurately detect the target object.
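  • A minimal sketch of one training iteration consistent with the steps above is given below. The model interface, the label format, and the two loss functions are illustrative assumptions; the actual loss design is not specified in this passage.

```python
def train_step(model, optimizer, sample_image, labels, corner_loss_fn, offset_loss_fn):
    """One training iteration: predict sample corners and centripetal offsets,
    compare against the labeled target sample objects, and adjust network parameters."""
    model.train()
    # The model is assumed to return corner heat maps and centripetal offset tensors.
    pred_heatmaps, pred_centripetal = model(sample_image)
    loss = (corner_loss_fn(pred_heatmaps, labels["corner_heatmaps"])
            + offset_loss_fn(pred_centripetal, labels["centripetal_offsets"]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # network parameter values are adjusted from prediction vs. label
    return loss.item()
```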
  • The embodiments of the present disclosure further provide a target detection device, including: an obtaining part configured to obtain the image to be detected; a determining part configured to determine, based on the image to be detected, the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point, where the corner points characterize the positions of the target objects in the image to be detected; and a detection part configured to determine the target object in the image to be detected based on the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point.
  • In a possible implementation, the determining part is configured to: perform feature extraction on the image to be detected to obtain an initial feature map; perform corner pooling processing on the initial feature map to obtain a corner-pooled feature map; and determine, based on the corner-pooled feature map, the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point.
  • In a possible implementation, when the determining part determines the corner position information of each corner point in the image to be detected based on the corner-pooled feature map, it is configured to: generate a corner heat map based on the corner-pooled feature map and determine the probability value of each feature point in the corner heat map being a corner point; select corner points according to the probability values and acquire the position information of each corner point in the corner heat map and the local offset information corresponding to each corner point, where the local offset information is used to indicate the position offset, in the corner heat map, of the real physical point represented by the corresponding corner point; and determine the corner position information of each corner point in the image to be detected based on the position information of each corner point in the corner heat map, the local offset information corresponding to each corner point, and the size ratio between the corner heat map and the image to be detected.
  • In a possible implementation, when the determining part determines the centripetal offset tensor corresponding to each corner point based on the corner-pooled feature map, it is configured to: determine, based on the corner-pooled feature map, the steering offset tensor corresponding to each feature point in the corner-pooled feature map, where the steering offset tensor corresponding to each feature point represents the offset tensor from that feature point to the center point of the target object in the image to be detected; determine the offset domain information of each feature point based on its steering offset tensor, where the offset domain information includes the offset tensors from multiple initial feature points associated with the feature point to their corresponding offset feature points; adjust the feature data of the feature points in the corner-pooled feature map to obtain an adjusted feature map; and determine the centripetal offset tensor corresponding to each corner point based on the adjusted feature map.
  • In a possible implementation, the corner heat map corresponding to the image to be detected includes corner heat maps corresponding to multiple channels, and each of the multiple channels corresponds to a preset object category. After the determining part determines the probability value of each feature point in the corner heat map being a corner point based on the corner heat map, it is further configured to: determine, for each channel, whether a corner point exists in the corner heat map corresponding to that channel, and if a corner point exists in the corner heat map corresponding to the channel, determine that the image to be detected contains a target object of the preset object category corresponding to that channel.
  • In a possible implementation, the detection part is configured to: determine the detection frame of the target object in the image to be detected.
  • In a possible implementation, when the detection part determines the detection frame of the target object in the image to be detected based on the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point, it is configured to: determine candidate corner point pairs based on the corner position information of the corner points; and determine the detection frame of the target object from the candidate detection frames based on the position information of the center point pointed to by each corner point in each candidate corner point pair and the center area information corresponding to the candidate corner point pair.
  • In a possible implementation, when the detection part determines the center area information corresponding to a candidate corner point pair based on the corner position information of each corner point of the candidate corner point pair in the image to be detected, it is configured to: determine the coordinate range of the central area frame corresponding to the candidate corner point pair.
  • In a possible implementation, when the detection part determines the detection frame of the target object based on the position information of the center point pointed to by each corner point in each candidate corner point pair and the center area information corresponding to the candidate corner point pair, it is configured to: select valid candidate corner point pairs; determine the score of the candidate detection frame corresponding to each valid candidate corner point pair based on the position information of the center points pointed to by the corner points of the valid candidate corner point pair, the central area information corresponding to the valid candidate corner point pair, and the probability value corresponding to each corner point in the valid candidate corner point pair, where the probability value corresponding to each corner point is used to indicate the probability that the feature point corresponding to that corner point in the corner heat map is a corner point; and determine the detection frame of the target object from the candidate detection frames based on the scores.
  • In a possible implementation, the detection part is further configured to: determine the instance information of the target object in the image to be detected based on the detection frame of the target object and the initial feature map obtained by feature extraction of the image to be detected.
  • In a possible implementation, when the detection part determines the instance information of the target object in the image to be detected based on the detection frame of the target object and the initial feature map obtained by feature extraction of the image to be detected, it is configured to: determine the instance information of the target object in the image to be detected.
  • In a possible implementation, the target detection device further includes a neural network training part configured to train the neural network used for target detection, where the neural network is obtained by training with sample images containing labeled target sample objects.
  • In a possible implementation, the neural network training part is configured to train the neural network with the following steps: acquiring a sample image; determining, based on the sample image, the corner position information of each sample corner point and the centripetal offset tensor corresponding to each sample corner point; detecting the predicted target sample object in the sample image; and adjusting the network parameter values of the neural network based on the predicted target sample object in the sample image and the labeled target sample object in the sample image.
  • An embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor. When the electronic device is running, the processor and the memory communicate through the bus, and when the machine-readable instructions are executed by the processor, the steps of the target detection method described in the first aspect are executed.
  • Embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the target detection method described in the first aspect are executed.
  • Embodiments of the present disclosure provide a computer program, including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes the steps of the target detection method described in the first aspect.
  • FIG. 1 shows a schematic diagram of a result obtained when detecting an image to be detected
  • FIG. 2 shows a flowchart of an exemplary target detection method provided by an embodiment of the present disclosure
  • FIG. 3 shows a flowchart of a process for determining the position information of corner points and the centripetal offset tensor corresponding to the corner points provided by an embodiment of the present disclosure
  • FIG. 4 shows a flowchart for determining the position information of a corner point and the centripetal offset tensor corresponding to the corner point provided by an embodiment of the present disclosure
  • FIG. 5 shows a flow chart of determining the centripetal offset tensor corresponding to a corner point provided by an embodiment of the present disclosure
  • FIG. 6 shows a schematic flow chart of an exemplary feature adjustment network provided by an embodiment of the present disclosure for adjusting a feature map after corner pooling
  • FIG. 7 shows a schematic diagram of a process for determining the category of a target object provided by an embodiment of the present disclosure
  • FIG. 8 shows a schematic flow chart of determining a detection frame of a target object provided by an embodiment of the present disclosure
  • FIG. 9 shows a schematic flowchart of determining a detection frame of a target object based on each candidate corner point pair provided by an embodiment of the present disclosure
  • FIG. 10 shows a schematic flowchart corresponding to an exemplary target detection method provided by an embodiment of the present disclosure
  • FIG. 11 shows a schematic flowchart of a neural network training method provided by an embodiment of the present disclosure
  • FIG. 12 shows a schematic structural diagram of a target detection device provided by an embodiment of the present disclosure
  • FIG. 13 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • The embodiments of the present disclosure provide a target detection method, which can improve the accuracy of the detection result.
  • In the target detection method provided by the embodiments of the present disclosure, after the image to be detected is acquired, the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point are first determined. Because a corner point refers to a main feature point in the image, the corner position information in the image to be detected can characterize the position of each target object in the image to be detected; for example, the corner points can include the upper left corner point and the lower right corner point.
  • The target detection method proposed in the embodiments can therefore determine the corner points belonging to the same target object based on the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point, and then detect the same target object based on the determined corner points.
  • The execution subject of the target detection method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, for example a terminal device, a server, or other processing equipment. In some possible implementations, the target detection method can be implemented by a processor invoking computer-readable instructions stored in a memory.
  • Referring to FIG. 2, the method includes steps S201 to S203, which are as follows:
  • S201: Acquire an image to be detected.
  • The image to be detected here can be an image captured in a specific environment. For example, to detect vehicles at a traffic intersection, a camera can be installed at the intersection, the video stream of the intersection over a certain period of time is collected by the camera, and the video stream is then divided into frames to obtain the images to be detected; to detect animals in a zoo, a camera can be installed in the zoo, the video stream of the zoo over a certain period of time is collected by the camera, and the video stream is divided into frames to obtain the images to be detected.
  • The image to be detected may contain the target object. The target object here refers to the object to be detected in the specific environment, such as the vehicles at the traffic intersection or the animals in the zoo mentioned above; the image may also not contain the target object. If the target object is not included, the detection result is empty, and the embodiments of the present disclosure describe the case in which the image to be detected contains the target object.
  • S202: Based on the image to be detected, determine the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point, where the corner points characterize the position of the target object in the image to be detected.
  • The position of the target object in the image to be detected can be represented by a detection frame. The embodiments of the present disclosure use corner points to characterize the position of the target object in the image to be detected, that is, the corner points here may be the corner points of the detection frame. For example, the position of the target object in the image to be detected is characterized by the upper left corner point and the lower right corner point, where the upper left corner point is the upper left corner point of the detection frame and the lower right corner point is the lower right corner point of the detection frame; the upper left corner point refers to the intersection of the line corresponding to the upper contour of the target object and the line corresponding to the left contour of the target object, and the lower right corner point refers to the intersection of the line corresponding to the lower contour of the target object and the line corresponding to the right contour of the target object.
  • The corner points characterizing the position of the target object are not limited to the upper left corner point and the lower right corner point; the position of the target object can also be characterized by the upper right corner point and the lower left corner point. The embodiments of the present disclosure take the upper left corner point and the lower right corner point as an example for illustration.
  • The centripetal offset tensor here refers to the offset tensor from the corner point to the center position of the target object. Because the image to be detected is a two-dimensional image, the centripetal offset tensor includes offsets in two directions; when the two directions are the X-axis direction and the Y-axis direction, the centripetal offset tensor includes an offset value in the X-axis direction and an offset value in the Y-axis direction. Through a corner point and its corresponding centripetal offset tensor, the center position pointed to by that corner point can be determined.
  • For corner points belonging to the same target object, the center positions pointed to by the corner points should be the same or relatively close, so the corner points belonging to the same target object can be determined based on the centripetal offset tensor corresponding to each corner point, and the detection frame of the target object can then be determined based on the determined corner points.
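  • A minimal numeric illustration of this grouping idea is given below; all coordinate and offset values are made up, and the distance threshold is an illustrative choice. Each corner plus its centripetal offset tensor gives a predicted center, and corners whose predicted centers nearly coincide are treated as corners of the same object.

```python
import numpy as np

# Hypothetical corner positions (X, Y) in the image and their centripetal offsets.
top_left     = np.array([100.0, 150.0])
bottom_right = np.array([300.0, 400.0])
offset_tl    = np.array([ 95.0,  120.0])   # points from the corner toward the center
offset_br    = np.array([-105.0, -130.0])

center_from_tl = top_left + offset_tl        # -> [195., 270.]
center_from_br = bottom_right + offset_br    # -> [195., 270.]

# If the two pointed-to centers are close enough, the corners are treated as
# belonging to the same target object (the threshold here is only illustrative).
same_object = np.linalg.norm(center_from_tl - center_from_br) < 10.0
print(center_from_tl, center_from_br, same_object)
```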
  • The embodiments of the present disclosure use a neural network to determine the corner points and the centripetal offset tensors corresponding to the corner points, which will be described in conjunction with the following embodiments.
  • S203: Determine the target object in the image to be detected based on the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point.
  • The corner position information of each corner point in the image to be detected refers to the corner position information of each of the multiple corner points in the image to be detected, and the centripetal offset tensor corresponding to each corner point refers to the centripetal offset tensor corresponding to each of the multiple corner points.
  • Detecting the target object in the image to be detected can include detecting the location of the target object, for example determining the detection frame of the target object in the image to be detected, determining the instance information of the target object in the image to be detected, or determining both the detection frame and the instance information of the target object in the image to be detected. How to determine the target object in the image to be detected will be explained in detail later.
  • In the target detection method proposed in the above steps S201 to S203, after the image to be detected is acquired, the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point are first determined. Because a corner point refers to a main feature point in the image, the corner position information in the image to be detected can characterize the position of each target object in the image to be detected. For example, the corner points can include the upper left corner point and the lower right corner point, where the upper left corner point refers to the intersection of the line corresponding to the upper contour of the target object and the line corresponding to the left contour of the target object, and the lower right corner point refers to the intersection of the line corresponding to the lower contour of the target object and the line corresponding to the right contour of the target object.
  • For an upper left corner point and a lower right corner point belonging to the same target object, the positions pointed to by their centripetal offset tensors should be relatively close. Therefore, the target detection method proposed in the embodiments of the present disclosure can determine the corner points belonging to the same target object based on the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point, and then detect the same target object based on the determined corner points.
  • In a possible implementation, when the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point are determined, as shown in FIG. 3, the following steps S301 to S303 may be included:
  • S301: Perform feature extraction on the image to be detected to obtain an initial feature map corresponding to the image to be detected.
  • S302: Perform corner pooling processing on the initial feature map to obtain a corner-pooled feature map.
  • S303: Based on the corner-pooled feature map, determine the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point.
  • The size of the image to be detected is fixed, for example H*W, where H and W respectively represent the numbers of pixels in the length and width directions of the image to be detected. The image to be detected is then input into a pre-trained hourglass convolutional neural network for feature extraction, such as texture feature extraction, color feature extraction, and edge feature extraction, and the initial feature map corresponding to the image to be detected can be obtained.
  • The input end of the hourglass convolutional neural network has a requirement on the received image size, that is, it receives images to be detected of a set size. If the size of the image to be detected does not meet the set size, the size of the image to be detected needs to be adjusted first, and the adjusted image is then input into the hourglass convolutional neural network for feature extraction and size compression, so that an initial feature map with a size of h*w*c can be obtained, where c represents the number of channels of the initial feature map, and h and w represent the size of the initial feature map in each channel.
  • The initial feature map contains multiple feature points, and each feature point has feature data; these feature data can represent the global information of the image to be detected. The embodiments of the present disclosure propose to perform corner pooling processing on the initial feature map to obtain the corner-pooled feature map. Compared with the initial feature map, the corner-pooled feature map enhances the semantic information of the target object contained at the corner points, so the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point can be determined more accurately based on the corner-pooled feature map.
  • In this way, an initial feature map is obtained, and corner pooling is performed on the initial feature map to obtain a feature map from which the corner points and the centripetal offset tensors corresponding to the corner points can be conveniently extracted, that is, the corner-pooled feature map.
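  • The exact pooling operation is not detailed in this passage; the sketch below implements the commonly used form of top-left corner pooling, stated here as an assumption. Each location takes the maximum response to its right along the row plus the maximum response below along the column, so the pooled value at a top-left corner aggregates evidence from the object's top and left boundaries.

```python
import torch

def top_left_corner_pool(x: torch.Tensor) -> torch.Tensor:
    """x: feature map of shape (N, C, H, W).
    Returns max over all positions to the right (per row) plus
    max over all positions below (per column)."""
    # Reverse, take a running cumulative max, reverse back:
    # at column j this yields the max of x[..., j:].
    right_max = x.flip(-1).cummax(dim=-1).values.flip(-1)
    down_max = x.flip(-2).cummax(dim=-2).values.flip(-2)
    return right_max + down_max

pooled = top_left_corner_pool(torch.randn(1, 256, 64, 64))
```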
  • The corner-pooled feature map and a pre-trained neural network can be used to determine whether corner points exist and, if they do, to determine the corner position information of each corner point in the image to be detected.
  • When the position of the target object in the image to be detected is characterized by the upper left corner point and the lower right corner point, the process of determining the corner position information of each corner point in the image to be detected can include the process of determining the corner position information of the upper left corner point in the image to be detected and the process of determining the corner position information of the lower right corner point in the image to be detected. The corner position information of the upper left corner point in the image to be detected can be detected through an upper left corner point detection network, and the corner position information of the lower right corner point in the image to be detected can be detected through a lower right corner point detection network. Because the method for determining the corner position information of the upper left corner point in the image to be detected is similar to that for the lower right corner point, the embodiments of the present disclosure take the determination of the corner position information of the upper left corner point in the image to be detected as an example for detailed description.
  • The upper left corner point detection network may include an upper left corner point heat map prediction network and an upper left corner point local offset prediction network. When the corner position information of each corner point in the image to be detected is determined based on the corner-pooled feature map, as shown in FIG. 4, the following steps S401 to S404 may be included:
  • S401: Generate a corner heat map corresponding to the image to be detected based on the corner-pooled feature map, and determine the probability value of each feature point in the corner heat map being a corner point.
  • The corner heat map here can be obtained by the upper left corner point heat map prediction network in the upper left corner point detection network: the corner-pooled feature map is input into the upper left corner point heat map prediction network, and the upper left corner point heat map corresponding to the image to be detected can be obtained.
  • The upper left corner point heat map contains multiple feature points, and each feature point has its corresponding feature data, from which the probability value of the feature point being an upper left corner point can be determined.
  • The upper left corner point heat map can be used to determine the upper left corner points in the image to be detected, and can also be used to determine the category of the target object that the upper left corner point characterizes in the image to be detected; the process of determining the category of the target object will be explained in detail later.
  • S402: Select corner points based on the probability value of each feature point being a corner point, that is, the feature points whose probability values are greater than a set threshold are taken as upper left corner points.
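  • A small sketch of this selection step is given below, assuming a probability-valued heat map; the threshold value and the use of a top-k limit are illustrative assumptions.

```python
import torch

def select_corners(heatmap: torch.Tensor, threshold: float = 0.3, max_corners: int = 100):
    """heatmap: (C, H, W) per-category probability of each feature point being a corner.
    Returns the channel, row, column, and probability of each selected corner point."""
    probs, flat_idx = heatmap.flatten().topk(min(max_corners, heatmap.numel()))
    keep = probs > threshold                    # probability greater than the set threshold
    probs, flat_idx = probs[keep], flat_idx[keep]
    per_channel = heatmap.shape[1] * heatmap.shape[2]
    c = flat_idx // per_channel
    rest = flat_idx % per_channel
    y, x = rest // heatmap.shape[2], rest % heatmap.shape[2]
    return c, y, x, probs

channels, ys, xs, scores = select_corners(torch.rand(80, 128, 128).pow(4))
```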
  • S403: Obtain the position information of each selected corner point in the corner heat map and the local offset information corresponding to each corner point.
  • The local offset information is used to indicate the position offset information, in the corner heat map, of the real physical point represented by the corresponding corner point. For example, the local offset information corresponding to each upper left corner point is used to indicate the position offset, in the upper left corner point heat map, of the real physical point represented by that upper left corner point.
  • The local offset information here can be represented by a local offset tensor. Like the centripetal offset tensor, the local offset tensor represents offset values in two directions in the upper left corner point heat map; for example, the coordinate system of the upper left corner point heat map includes two directions, namely the x-axis direction and the y-axis direction, and the local offset tensor includes an offset value in the x-axis direction and an offset value in the y-axis direction.
  • After the upper left corner point heat map is obtained, the position information of each feature point of the upper left corner point heat map in that heat map can be obtained. Considering that there may be errors between the obtained position information of an upper left corner point and the position information of the real physical point it represents, that is, the position information of a certain upper left corner point detected in the upper left corner point heat map may deviate from the position information of the real physical point it represents, the local offset information is used to indicate this deviation.
  • S404: Determine the corner position information of each corner point in the image to be detected based on the position information of each corner point in the corner heat map, the local offset information corresponding to each corner point, and the size ratio between the corner heat map and the image to be detected.
  • The acquired position information of each upper left corner point in the upper left corner point heat map may include the coordinate value x in the x-axis direction and the coordinate value y in the y-axis direction of the upper left corner point heat map; the corner position information of each upper left corner point in the image to be detected may include the coordinate value X in the X-axis direction and the coordinate value Y in the Y-axis direction of the image to be detected.
  • The corner position information of the i-th upper left corner point in the image to be detected can be determined according to the following formula (1) and formula (2):
  • tl_x(i) = n * (x_l(i) + δ_lx(i))    (1)
  • tl_y(i) = n * (y_l(i) + δ_ly(i))    (2)
  • where tl_x(i) represents the coordinate value of the i-th upper left corner point in the X-axis direction of the image to be detected; tl_y(i) represents the coordinate value of the i-th upper left corner point in the Y-axis direction of the image to be detected; n represents the size ratio between the upper left corner point heat map and the image to be detected; x_l(i) represents the coordinate value of the i-th upper left corner point in the x-axis direction of the upper left corner point heat map; y_l(i) represents the coordinate value of the i-th upper left corner point in the y-axis direction of the upper left corner point heat map; δ_lx(i) represents the offset value, in the x-axis direction of the upper left corner point heat map, of the real physical point represented by the i-th upper left corner point; and δ_ly(i) represents the offset value, in the y-axis direction of the upper left corner point heat map, of the real physical point represented by the i-th upper left corner point.
  • The above process is the process of determining the corner position information of the upper left corner point in the image to be detected. The process of determining the corner position information of the lower right corner point in the image to be detected is the same, that is, the corner-pooled feature map is input into the lower right corner point heat map prediction network in the lower right corner point detection network to obtain the lower right corner point heat map; the probability value of each feature point in the lower right corner point heat map being a lower right corner point is then determined and the lower right corner points are selected; and the corner position information of each lower right corner point in the image to be detected is determined in combination with the local offset information corresponding to the lower right corner point determined by the lower right corner point local offset prediction network in the lower right corner point detection network, which will not be repeated here.
  • The corner position information of the j-th lower right corner point in the image to be detected can be determined according to the following formula (3) and formula (4):
  • br_x(j) = n * (x_r(j) + δ_rx(j))    (3)
  • br_y(j) = n * (y_r(j) + δ_ry(j))    (4)
  • where br_x(j) represents the coordinate value of the j-th lower right corner point in the X-axis direction of the image to be detected; br_y(j) represents the coordinate value of the j-th lower right corner point in the Y-axis direction of the image to be detected; n represents the size ratio between the lower right corner point heat map and the image to be detected; x_r(j) represents the coordinate value of the j-th lower right corner point in the x-axis direction of the lower right corner point heat map; y_r(j) represents the coordinate value of the j-th lower right corner point in the y-axis direction of the lower right corner point heat map; δ_rx(j) represents the offset value, in the x-axis direction of the lower right corner point heat map, of the real physical point represented by the j-th lower right corner point; and δ_ry(j) represents the offset value, in the y-axis direction of the lower right corner point heat map, of the real physical point represented by the j-th lower right corner point.
  • The above steps S401 to S404 are a way of determining the corner position information of each corner point in the image to be detected according to the embodiments of the present disclosure. This process introduces a corner heat map and determines the feature points that can serve as corner points through the probability value of each feature point being a corner point; after the corner points are selected, the position information of the corner points in the corner heat map is corrected to determine the corner position information of the corner points in the image to be detected. This method can obtain corner position information of higher accuracy, thereby facilitating the subsequent detection of the position of the target object in the image to be detected based on the corner points.
  • For the centripetal offset tensor corresponding to each corner point, the determination of the centripetal offset tensor corresponding to the upper left corner point is taken as an example for detailed description; the method for determining the centripetal offset tensor corresponding to the lower right corner point is similar and will not be repeated in the embodiments of the present disclosure.
  • In determining the centripetal offset tensor, a feature adjustment process is introduced to adjust the corner-pooled feature map, and the centripetal offset tensor is then determined. When the centripetal offset tensor corresponding to each corner point is determined based on the corner-pooled feature map, as shown in FIG. 5, the following steps S501 to S504 may be included:
  • S501: Determine the steering offset tensor corresponding to each feature point in the corner-pooled feature map based on the corner-pooled feature map, where the steering offset tensor corresponding to each feature point represents the offset tensor from that feature point to the center point of the target object in the image to be detected.
  • The position of the target object in the image to be detected is related to the target object information, that is, it is hoped that the feature data of the corner points in the corner-pooled feature map can contain richer target object information. Therefore, the steering offset tensor corresponding to each feature point is considered here, so that the corner point feature data can contain richer target object information; based on the steering offset tensor corresponding to each feature point, the corner-pooled feature map can be adjusted so that each feature point in the adjusted feature map, especially the corner points, contains richer target object information.
  • A convolution operation can be performed on the corner-pooled feature map to obtain the steering offset tensor corresponding to each feature point in the corner-pooled feature map. The steering offset tensor includes an offset value along the x-axis direction and an offset value along the y-axis direction. Here, the convolution operation is performed on the corner-pooled feature map mainly to obtain the steering offset tensor of each feature point when it serves as an upper left corner point.
  • S502: Determine the offset domain information of each feature point based on the steering offset tensor corresponding to each feature point, where the offset domain information includes the offset tensors from multiple initial feature points associated with the feature point to their corresponding offset feature points.
  • A convolution operation is performed on the steering offset tensor corresponding to each feature point to obtain the offset domain information of the feature point. Taking the determination of the centripetal offset tensor corresponding to the upper left corner point as an example, after the steering offset tensor of each feature point when it serves as an upper left corner point is obtained, a convolution operation is performed on these steering offset tensors to obtain the offset domain information of each feature point when it serves as an upper left corner point.
  • S503: Adjust the feature data of the feature points in the corner-pooled feature map based on the corner-pooled feature map and the offset domain information of the feature points in the corner-pooled feature map, to obtain an adjusted feature map.
  • After the offset domain information of each feature point of the corner-pooled feature map when it serves as an upper left corner point is obtained, a deformable convolution operation is performed on the corner-pooled feature map together with this offset domain information, and the adjusted feature map corresponding to the upper left corner point is obtained.
  • The above steps S501 to S503 can be implemented through a feature adjustment network as shown in FIG. 6: a convolution operation is first performed on the corner-pooled feature map to obtain the steering offset tensor, and a convolution operation is then performed on the steering offset tensor to obtain the offset domain information.
  • The offset domain information here is explained as follows: taking feature point A in the corner-pooled feature map as an example, without the offset domain information the feature data of feature point A would be adjusted by a convolution operation over the feature data of the 9 initial feature points represented by the solid-line boxes. After the offset domain information is considered, it is hoped that feature point A can be adjusted with feature data containing richer target object information. For example, the feature points used for the feature adjustment of feature point A can be offset based on the steering offset tensor corresponding to each feature point; the offset feature points are represented in the corner-pooled feature map by the 9 dashed boxes in FIG. 6, so that the feature data of the 9 offset feature points can be used in the convolution operation to adjust the feature data of feature point A.
  • The offset domain information can be represented by the offset tensors shown in FIG. 6. Each offset tensor in the offset domain information is the offset tensor from an initial feature point to the offset feature point corresponding to that initial feature point, representing that, after the initial feature point is offset in the x-axis direction and the y-axis direction, the offset feature point corresponding to the initial feature point is obtained.
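  • The feature adjustment network of FIG. 6 can be sketched roughly as follows using a deformable convolution; the layer sizes, the 3x3 sampling grid, and the way the steering offset tensor is expanded into the offset field are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class FeatureAdjustment(nn.Module):
    """Sketch of FIG. 6: corner-pooled features -> steering offset -> offset field
    -> deformable convolution that re-samples richer target-object features."""
    def __init__(self, channels: int = 256, k: int = 3):
        super().__init__()
        self.k = k
        self.steering = nn.Conv2d(channels, 2, 3, padding=1)       # per-point steering offset (dx, dy)
        self.offset_field = nn.Conv2d(2, 2 * k * k, 3, padding=1)  # offsets for the k*k sampling grid
        self.weight = nn.Parameter(torch.randn(channels, channels, k, k) * 0.01)

    def forward(self, pooled: torch.Tensor):
        steering = self.steering(pooled)        # (N, 2, H, W) steering offset tensor
        offsets = self.offset_field(steering)   # (N, 2*k*k, H, W) offset domain information
        adjusted = deform_conv2d(pooled, offsets, self.weight, padding=self.k // 2)
        return adjusted, steering

adjusted, steering = FeatureAdjustment()(torch.randn(1, 256, 64, 64))
```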
  • In this way, when the centripetal offset tensor corresponding to each upper left corner point is determined based on the adjusted feature map, a more accurate centripetal offset tensor can be obtained, because the feature points after feature adjustment contain richer target object information; the same applies to determining the centripetal offset tensor corresponding to each lower right corner point based on the corresponding adjusted feature map.
  • S504: Perform a convolution operation on the feature data corresponding to the corner points in the adjusted feature map, and determine the centripetal offset tensor corresponding to each corner point.
  • The adjusted feature map may include the adjusted feature map corresponding to the upper left corner point and the adjusted feature map corresponding to the lower right corner point. When the centripetal offset tensor corresponding to each upper left corner point is determined based on the adjusted feature map corresponding to the upper left corner point, it can be determined by the centripetal offset prediction network corresponding to the upper left corner point; when the centripetal offset tensor corresponding to each lower right corner point is determined based on the adjusted feature map corresponding to the lower right corner point, it can be determined by the centripetal offset prediction network corresponding to the lower right corner point.
  • The above process of S501 to S504 is the process of determining the centripetal offset tensor provided by the embodiments of the present disclosure. By considering the target object information, for example by introducing the steering offset tensor corresponding to the corner points and the offset domain information of the feature points, the feature data of the feature points in the corner-pooled feature map are adjusted, so that the feature data of the feature points in the adjusted feature map can contain richer target object information and a more accurate centripetal offset tensor corresponding to each corner point can be determined. Through the accurate centripetal offset tensor, the position information of the center point pointed to by a corner point can be accurately obtained, so that the position of the target object in the image to be detected can be accurately detected.
  • In addition, the category of the target object contained in the image to be detected can be determined through the corner heat map. The following describes how to determine the category of the target object based on the corner heat map. As described above, the corner heat map of the image to be detected includes the corner heat maps corresponding to multiple channels, and each channel corresponds to a preset object category. After the probability value of each feature point in the corner heat map being a corner point is determined based on the corner heat map, the detection method provided by the embodiments of the present disclosure further includes the following steps S701 to S702:
S701: For each channel of the multiple channels, determine, based on the probability value of each feature point in the corner heat map corresponding to the channel being a corner point, whether a corner point exists in the corner heat map corresponding to that channel.
Based on the probability value of each feature point in the corner heat map corresponding to each channel being a corner point, it can be determined whether a corner point exists in the corner heat map of that channel. For example, if the corner heat map of a channel contains multiple feature points whose corresponding probability values are greater than a set threshold, the corner heat map of that channel contains corner points with a high probability; since corner points represent the position of a target object in the image to be detected, it can be concluded that the image to be detected contains a target object of the preset object category corresponding to that channel. For example, if the number of channels is set to 100, the obtained corner heat map has the size h*w*100, and each channel corresponds to a preset object category. For a certain image to be detected, if only the corner heat maps in the first channel and the second channel among the 100 channels contain corner points, the preset object category corresponding to the first channel is 01, and the preset object category corresponding to the second channel is 02, then it can be concluded that the image to be detected contains target objects of the categories 01 and 02.
S702: In the case where a corner point exists in the corner heat map corresponding to a channel, determine that the image to be detected contains a target object of the preset object category corresponding to that channel. In other words, the embodiment of the present disclosure proposes that, by inputting the corner-pooled feature map into the corner heat map prediction network, a corner heat map containing the preset number of channels can be obtained; by judging whether the corner heat map corresponding to each channel contains corner points, it can be determined whether the image to be detected contains a target object corresponding to that channel. Further, the centripetal offset tensor corresponding to each corner point can be determined, so that the position of the target object corresponding to each channel in the image to be detected can be determined, and the category of each target object in the image to be detected can be determined in combination with the category of the target object corresponding to the channel.
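A minimal sketch of the per-channel check described above is given below, assuming the corner heat map is available as an (H, W, C) probability array and assuming a threshold value of 0.5; neither the array layout nor the threshold is prescribed by this embodiment.

```python
import numpy as np

def categories_in_image(corner_heatmap: np.ndarray, threshold: float = 0.5):
    """Return the channel (category) indices whose heat map contains a corner.

    corner_heatmap: (H, W, C) probabilities that each feature point is a corner.
    """
    present = []
    for channel in range(corner_heatmap.shape[-1]):
        if (corner_heatmap[..., channel] > threshold).any():
            present.append(channel)  # channel index maps to a preset object category
    return present
```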
The following describes how the detection frame of the target object in the image to be detected is determined. The embodiment of the present disclosure takes one upper left corner point and one lower right corner point determining a detection frame as an example for description.

S801: Determine candidate corner point pairs based on the corner position information of each corner point in the image to be detected. Specifically, it can first be judged whether an upper left corner point and a lower right corner point belong to the same target object category; in the case where it is determined that a certain upper left corner point and a certain lower right corner point belong to the same target object category, it is further determined whether the corner position information of that upper left corner point and that lower right corner point in the image to be detected can constitute a candidate detection frame. For a candidate detection frame, the upper left corner point should be located to the upper left of the lower right corner point in the image to be detected; if the corner position information of the upper left corner point and the lower right corner point, such as the position coordinates of the upper left corner point in the image to be detected and the position coordinates of the lower right corner point in the image to be detected, cannot make the upper left corner point be located to the upper left of the lower right corner point, the two corner points cannot constitute a candidate corner point pair. In specific implementation, a coordinate system can be established in the image to be detected, the coordinate system including an X axis and a Y axis; the corner position information of each corner point in the coordinate system includes an abscissa value in the X-axis direction and an ordinate value in the Y-axis direction. Then, in the coordinate system, according to the coordinate values of each corner point, the upper left corner points and lower right corner points that can constitute candidate detection frames are filtered out.
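By way of illustration, a sketch of this candidate corner point pair screening is given below, assuming each corner is represented as an (x, y, category, score) tuple in the image coordinate system; that representation is an assumption for the sketch.

```python
def candidate_corner_pairs(top_left_corners, bottom_right_corners):
    """Pair corners of the same category where the top-left corner really lies
    to the upper left of the bottom-right corner (illustrative sketch)."""
    pairs = []
    for tl in top_left_corners:
        for br in bottom_right_corners:
            same_category = tl[2] == br[2]
            upper_left = tl[0] < br[0] and tl[1] < br[1]
            if same_category and upper_left:
                pairs.append((tl, br))
    return pairs
```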
S802: Determine the position information of the center point pointed to by each corner point, based on the corner position information of each corner point of each candidate corner point pair in the image to be detected and the centripetal offset tensor corresponding to the corner point. In specific implementation, the position information of the center point pointed to by the upper left corner point in each candidate corner point pair can be determined according to formula (5), and the position information of the center point pointed to by the lower right corner point in each candidate corner point pair can be determined according to formula (6).
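Since formulas (5) and (6) are given in the figures and not reproduced here, the following sketch simply assumes that the center point is the corner position plus its centripetal offset (dx, dy); this is an assumption for illustration, not the formulas themselves.

```python
def predicted_centres(tl, br, tl_offset, br_offset):
    """Centre points pointed to by each corner of a candidate pair (sketch).

    tl, br: (x, y) corner positions; tl_offset, br_offset: (dx, dy) centripetal offsets.
    """
    tl_centre = (tl[0] + tl_offset[0], tl[1] + tl_offset[1])
    br_centre = (br[0] + br_offset[0], br[1] + br_offset[1])
    return tl_centre, br_centre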
The central area information here can be preset; it is defined as the coordinate range of a central area frame whose center coincides with the center of the detection frame of the target object. Through the coordinate range of the central area frame, it is possible to check whether a candidate detection frame contains a single, unique target object. If the position information of the center point pointed to by the upper left corner point and the position information of the center point pointed to by the lower right corner point are both located within the coordinate range of the central area frame, and the coordinate range of the central area frame is small, the two center points can be considered close to each other, so it can be determined that the candidate detection frame formed by the candidate corner point pair contains a unique target object.
S803: Determine the central area information corresponding to each candidate corner point pair based on the corner position information of each corner point of the candidate corner point pair in the image to be detected. In specific implementation, this may include the following: supposing that the m-th candidate corner point pair is composed of the i-th upper left corner point and the j-th lower right corner point, the corner position information of the central area frame corresponding to the m-th candidate corner point pair can be determined according to formulas (7) to (10), and the coordinate range of the central area frame can then be determined according to formula (11), where R_central(m) represents the coordinate range of the central area frame corresponding to the m-th candidate corner point pair. The coordinate range of the central area frame is expressed through the x(m) values in the X-axis direction and the y(m) values in the Y-axis direction, where the ranges of x(m) and y(m) satisfy the constraints given in formulas (7) to (11).
S804: Determine the detection frame of the target object among the candidate detection frames based on the position information of the center point pointed to by each corner point in each candidate corner point pair and the central area information corresponding to the candidate corner point pair. The central area information corresponding to each candidate corner point pair is used to constrain how close the center point position information pointed to by the corner points of that candidate corner point pair must be. If the position information of the center point pointed to by each corner point in a certain candidate corner point pair is located within the central area frame corresponding to that candidate corner point pair, the center points pointed to by the corner points of the pair are relatively close, and the target object contained in the candidate detection frame constituted by that candidate corner point pair is a single, unique target object. In this way, the corner position information of the corner points is used first to determine the candidate corner point pairs that can constitute candidate detection frames, and then, based on the centripetal offset tensor of each corner point in each candidate corner point pair, it is determined whether the target object enclosed by the candidate detection frame is the same target object, so that the detection frames of all target objects in the image to be detected can be detected more accurately.
When the detection frame of the target object is determined among the candidate detection frames based on the position information of the center point pointed to by each corner point in each candidate corner point pair and the central area information corresponding to the candidate corner point pair, as shown in Figure 9, the following steps S901 to S903 may be included:

S901: Determine valid candidate corner point pairs based on the position information of the center point pointed to by each corner point in each candidate corner point pair and the central area information corresponding to the candidate corner point pair. If the position information of the center points pointed to by the corner points of a candidate corner point pair falls within the coordinate range of the corresponding central area frame, the candidate corner point pair is regarded as a valid candidate corner point pair. In specific implementation, formula (12) can be used to judge whether the candidate corner point pair formed by the i-th upper left corner point and the j-th lower right corner point is a valid candidate corner point pair, that is, to judge whether the coordinate range of the m-th central area frame corresponding to the candidate detection frame formed by the i-th upper left corner point and the j-th lower right corner point, together with the position information of the center points pointed to by the i-th upper left corner point and the j-th lower right corner point respectively, satisfies formula (12). When formula (12) is satisfied, the candidate corner point pair formed by the i-th upper left corner point and the j-th lower right corner point is a valid candidate corner point pair, and S902 continues to be performed on this valid candidate corner point pair; otherwise, the candidate corner point pair formed by the i-th upper left corner point and the j-th lower right corner point is an invalid candidate corner point pair, and it is further determined whether the i-th upper left corner point and other lower right corner points can form a valid candidate corner point pair, with the subsequent steps executed after a valid candidate corner point pair is obtained.
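The following sketch illustrates the check of S901, assuming the central area frame is given as an axis-aligned coordinate range (x_min, y_min, x_max, y_max); the exact inequality of formula (12) is in the figures and is not reproduced here, so this containment test is an assumption for illustration.

```python
def is_valid_pair(tl_centre, br_centre, central_region):
    """Sketch of the formula (12) check: a candidate pair is valid only if both
    predicted centre points fall inside the central area frame."""
    def inside(point):
        return (central_region[0] <= point[0] <= central_region[2]
                and central_region[1] <= point[1] <= central_region[3])
    return inside(tl_centre) and inside(br_centre)
```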
S902: Determine the score of the candidate detection frame corresponding to each valid candidate corner point pair. The probability value corresponding to each corner point is used to indicate the probability value that the feature point corresponding to that corner point in the corner heat map is a corner point. The score of the candidate detection frame corresponding to each valid candidate corner point pair can be represented, for example, by the area relationship between the region formed by the center points pointed to by the corner points of the valid candidate corner point pair and the central area frame corresponding to the valid candidate corner point pair, together with the probability value corresponding to each corner point in the valid candidate corner point pair. A candidate detection frame with a higher score has a larger probability of being taken as the detection frame of the target object, and the candidate detection frames are screened on this basis. In specific implementation, the score of the candidate detection frame corresponding to a valid candidate corner point pair can be determined according to formula (13), where s represents the score of the candidate detection frame corresponding to the valid candidate corner point pair formed by the i-th upper left corner point and the j-th lower right corner point; s_tl(i) represents the probability value that the feature point corresponding to the i-th upper left corner point in the upper-left-corner heat map is an upper left corner point; and s_br(j) represents the probability value that the feature point corresponding to the j-th lower right corner point in the lower-right-corner heat map is a lower right corner point.
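Formula (13) is not reproduced in the text; purely for illustration, the sketch below assumes the score is the mean of the two corner probabilities, which is a common choice but an assumption here.

```python
def box_score(s_tl: float, s_br: float) -> float:
    """Assumed analogue of formula (13): mean of the two corner probabilities."""
    return (s_tl + s_br) / 2.0
```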
S903: Determine the detection frame of the target object among the candidate detection frames based on the score of the candidate detection frame corresponding to each valid candidate corner point pair and the size of the overlapping area between adjacent candidate detection frames. The overlap between adjacent candidate detection frames can be measured by the size of their overlapping area in the image to be detected. The following describes how to filter the detection frame of the target object based on the score of the candidate detection frame corresponding to each valid candidate corner point pair and the overlapping area between adjacent candidate detection frames. In specific implementation, the detection frame of the target object can be screened from multiple candidate detection frames by soft non-maximum suppression: among multiple candidate detection frames whose mutual overlapping area is large, the candidate detection frame with the highest score can be used as the detection frame of the target object, and the remaining candidate detection frames among these candidate detection frames are suppressed, so that the detection frames of the target objects in the image to be detected can be obtained.
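The following is a minimal sketch of Gaussian soft non-maximum suppression as it could be applied here; the Gaussian decay, sigma, and the score threshold are assumptions for the sketch, not values prescribed by this embodiment.

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_threshold=0.001):
    """Gaussian soft-NMS sketch.

    boxes: (N, 4) array of (x1, y1, x2, y2); scores: (N,). Instead of deleting
    overlapping candidates outright, their scores are decayed according to the
    overlap with the currently best box; boxes whose score drops below the
    threshold are discarded. Returns the indices of the kept boxes.
    """
    boxes, scores = boxes.copy().astype(float), scores.copy().astype(float)
    keep = []
    idx = np.arange(len(scores))
    while idx.size > 0:
        best = idx[np.argmax(scores[idx])]
        keep.append(int(best))
        idx = idx[idx != best]
        if idx.size == 0:
            break
        # IoU of the best box with the remaining candidates
        x1 = np.maximum(boxes[best, 0], boxes[idx, 0])
        y1 = np.maximum(boxes[best, 1], boxes[idx, 1])
        x2 = np.minimum(boxes[best, 2], boxes[idx, 2])
        y2 = np.minimum(boxes[best, 3], boxes[idx, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_best = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
        area_rest = (boxes[idx, 2] - boxes[idx, 0]) * (boxes[idx, 3] - boxes[idx, 1])
        iou = inter / (area_best + area_rest - inter + 1e-9)
        scores[idx] *= np.exp(-(iou ** 2) / sigma)  # Gaussian score decay
        idx = idx[scores[idx] > score_threshold]
    return keep
```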
After the detection frame of the target object is obtained, the instance information of the target object within the detection frame can be determined. Specifically, the instance information of the target object in the image to be detected can be determined based on the detection frame of the target object and the initial feature map obtained by feature extraction of the image to be detected. The instance information here can be represented by a mask. A mask means that, after instance segmentation is performed on the target object in the image, the pixels belonging to each target object are given at the pixel level, so the mask can be accurate to the edge of the object. In this way, a more accurate position of the target object in the image to be detected can be obtained. In addition, the shape of the target object can also be represented by the mask, so that the determined category of the target object can be verified based on the shape, and subsequent action analysis can be performed on the target object based on the shape represented by the mask, which is not detailed in the embodiments of the present disclosure.
When the instance information of the target object in the image to be detected is determined based on the detection frame of the target object and the initial feature map obtained by feature extraction of the image to be detected, the following may be included. The detection frame of the target object and the initial feature map corresponding to the image to be detected are input into a region-of-interest extraction network. The region-of-interest extraction network first extracts the region of interest matching the size of the initial feature map, and then obtains, through region-of-interest alignment pooling, the feature data of the feature points of the initial feature map located inside the detection frame (that is, the region of interest). The feature data of these feature points is then input into the mask prediction network to generate the instance information of the target object; the instance information can be expressed in the form of a mask, and the mask of the target object can then be enlarged to the same size as the target object in the image to be detected, thereby obtaining the instance information of the target object in the image to be detected.
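A sketch of region-of-interest alignment pooling followed by mask prediction is given below; the RoI resolution, the feature stride, the layer sizes, and the use of torchvision's roi_align are assumptions for illustration, not the exact networks of this embodiment.

```python
import torch.nn as nn
from torchvision.ops import roi_align

class MaskHead(nn.Module):
    """Sketch: RoI alignment pooling on the initial feature map, then a mask net."""

    def __init__(self, channels: int):
        super().__init__()
        self.mask_net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1),
            nn.Sigmoid(),  # per-pixel foreground probability
        )

    def forward(self, initial_feature_map, detection_boxes, stride: int = 4):
        # detection_boxes: Tensor[K, 5] of (batch_index, x1, y1, x2, y2) in image coords
        rois = roi_align(initial_feature_map, detection_boxes,
                         output_size=(28, 28), spatial_scale=1.0 / stride,
                         aligned=True)
        masks = self.mask_net(rois)  # (K, 1, 28, 28)
        # each mask is later resized back to the object size in the image to be detected
        return masks
```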
As shown in Figure 10, for the image to be detected, the initial feature map f can be subjected to corner pooling to obtain the corner-pooled feature map p. Upper left corner detection and feature adjustment are then performed on the corner-pooled feature map p, so that the upper left corner points and the centripetal offset tensor corresponding to the upper left corner points can be obtained. The process of obtaining the upper left corner points is performed by the upper left corner point detection network, which includes the upper-left-corner heat map prediction network and the upper-left-corner local offset prediction network (not shown in Figure 10). When the centripetal offset tensor corresponding to the upper left corner points is determined, the feature adjustment network is first used to adjust the corner-pooled feature map p; this process includes determining the steering offset tensor corresponding to the upper left corner points, and then adjusting the corner-pooled feature map p based on a deformable convolution operation to obtain the adjusted feature map g. The centripetal offset tensor corresponding to the upper left corner points is then determined from the adjusted feature map g through a convolution operation. Similarly, the lower right corner points are determined by the lower right corner point detection network, and the centripetal offset tensor corresponding to the lower right corner points is obtained by feature adjustment and a convolution operation; this process is similar to the determination of the upper left corner points and their corresponding centripetal offset tensor. The detection frame of the target object is then determined based on the upper left corner points and their corresponding centripetal offset tensor, and the lower right corner points and their corresponding centripetal offset tensor. Afterwards, the region of interest is extracted based on the detection frame of the target object and the initial feature map f, and the region of interest is aligned and pooled to obtain the features of the region of interest (that is, the features of the initial feature map inside the detection frame). From these features, the mask of the target object can be obtained, and the mask is then enlarged to obtain a mask image of the same size as the image to be detected (that is, the instance information of the target object). Finally, the detection frame of the target object, the mask of the target object, and the category of the target object can be output; the required results can be obtained according to preset requirements, for example outputting only the detection frame of the target object, or only the mask image of the target object, or outputting both the detection frame and the mask image of the target object together with the category of the target object, which is not limited in the embodiments of the present disclosure.
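Purely for orientation, the flow of Figure 10 can be summarised as the hypothetical sketch below; every callable passed in is an assumed component rather than a module defined by this embodiment.

```python
def detect(image, backbone, corner_pool, tl_branch, br_branch, match_pairs, mask_head):
    """Hypothetical end-to-end flow corresponding to the description of Figure 10."""
    f = backbone(image)                         # initial feature map f
    p = corner_pool(f)                          # corner-pooled feature map p
    tl_corners, tl_offsets = tl_branch(p)       # upper-left corners + centripetal offsets
    br_corners, br_offsets = br_branch(p)       # lower-right corners + centripetal offsets
    boxes = match_pairs(tl_corners, br_corners, tl_offsets, br_offsets)  # detection frames
    masks = mask_head(f, boxes)                 # instance information per detection frame
    return boxes, masks
```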
The target detection method in the embodiments of the present disclosure may be implemented by a neural network, which is trained using sample images containing labeled target sample objects. The neural network used by the target detection method proposed in the embodiments of the present disclosure can be trained using the following steps S1101 to S1104.

S1101: Obtain sample images. The sample images here may include positive samples in which target sample objects are labeled and negative samples that do not contain target sample objects, and the target sample objects contained in the positive samples may belong to multiple categories. The positive samples in which target sample objects are labeled can further be divided into samples in which the target sample objects are labeled with detection frames and samples in which the target sample objects are labeled with masks.
S1102: Determine the corner position information of each sample corner point in the sample image and the centripetal offset tensor corresponding to each sample corner point. This process is similar to the above-described process of determining the corner position information of the corner points in the image to be detected and the centripetal offset tensor corresponding to each corner point, and is not repeated here.

S1103: Predict the target sample object in the sample image based on the corner position information of each sample corner point in the sample image and the centripetal offset tensor corresponding to each sample corner point. The process of predicting the target sample object in the sample image is similar to the above-described manner of determining the target object in the image to be detected, and is not repeated here.
S1104: Adjust the network parameter values of the neural network based on the target sample object predicted in the sample image and the target sample object labeled in the sample image. In specific implementation, a loss function can be introduced to determine the loss value corresponding to the prediction of the target sample object, and the network parameter values of the neural network can be adjusted through this loss value; for example, when the loss value is less than a set threshold, training can be stopped to obtain the network parameter values of the neural network. The processes of determining the detection frame of the target sample object, the mask of the target sample object, and the category of the target sample object are similar to the above-described processes of determining the detection frame of the target object, the mask of the target object, and the category of the target object, and are not repeated here.
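A hedged sketch of the training loop of S1101 to S1104 is given below; the optimiser, the learning rate, the loss function, and the stopping threshold are assumptions made for illustration, not values taken from this embodiment.

```python
import torch

def train(network, loader, loss_fn, epochs: int = 10, loss_stop: float = 0.05):
    """Illustrative training loop: predict, compare with labels, adjust parameters."""
    optimiser = torch.optim.Adam(network.parameters(), lr=1e-4)
    for _ in range(epochs):
        for sample_images, labelled_objects in loader:      # S1101: obtain sample images
            predictions = network(sample_images)             # S1102/S1103: corners, offsets, objects
            loss = loss_fn(predictions, labelled_objects)    # compare prediction with labels
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()                                 # S1104: adjust network parameters
            if loss.item() < loss_stop:                      # stop once the loss is small enough
                return network
    return network
```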
According to the above neural network training method, a sample image is obtained; based on the sample image, the corner position information of each sample corner point in the sample image and the centripetal offset tensor corresponding to each sample corner point are determined; and the target sample object is detected in the sample image based on that corner position information and those centripetal offset tensors. The sample corner points refer to key feature points representing the position of the target sample object. For example, the sample corner points can include upper left sample corner points and lower right sample corner points, where an upper left sample corner point refers to the intersection of the straight line corresponding to the upper contour of the target sample object and the straight line corresponding to the left contour of the target sample object, and a lower right sample corner point refers to the intersection of the straight line corresponding to the lower contour of the target sample object and the straight line corresponding to the right contour of the target sample object. When an upper left sample corner point and a lower right sample corner point belong to the detection frame of the same target sample object, the positions pointed to by the centripetal offset tensors corresponding to the upper left sample corner point and the lower right sample corner point should be relatively close. Therefore, the neural network training method proposed in the embodiment of the present disclosure determines the sample corner points belonging to the same target sample object based on the corner position information representing the position of the target sample object in the sample image and the centripetal offset tensor corresponding to each sample corner point, and the target sample object can then be detected based on the determined sample corner points.
Based on the same technical concept, the embodiment of the present disclosure also provides a target detection device corresponding to the target detection method. Since the technical principle of the device in the embodiment of the present disclosure is similar to that of the target detection method described above, the implementation of the device can refer to the implementation of the method, and repeated descriptions are omitted. Referring to FIG. 12, it is a schematic diagram of a target detection device 1200 provided by an embodiment of the present disclosure. The device includes an acquisition part 1201, a determination part 1202, and a detection part 1203. The acquisition part 1201 is configured to acquire the image to be detected. The determination part 1202 is configured to determine, based on the image to be detected, the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point, where the corner points represent the position of the target object in the image to be detected. The detection part 1203 is configured to determine the target object in the image to be detected based on the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point.
In some embodiments, the determination part 1202 is configured to: determine, based on the image to be detected, the corner heat map corresponding to the image to be detected and the local offset information corresponding to each corner point, where the local offset information is used to indicate the position offset information, in the corner heat map, of the real physical point represented by the corresponding corner point; and determine, based on the corner heat map and the local offset information, the corner position information of each corner point in the image to be detected, as well as the centripetal offset tensor corresponding to each corner point.

In some embodiments, when the determination part 1202 is configured to determine the centripetal offset tensor corresponding to each corner point, it is configured to: determine the offset domain information of the feature points in the corner-pooled feature map, where the offset domain information contains the offset tensors by which the multiple initial feature points associated with a feature point respectively point to their corresponding offset feature points; adjust, based on the offset domain information, the feature data of the feature points in the corner-pooled feature map to obtain the adjusted feature map; and determine the centripetal offset tensor corresponding to each corner point based on the adjusted feature map.
In some embodiments, the corner heat map corresponding to the image to be detected includes corner heat maps corresponding to multiple channels, and each channel of the multiple channels corresponds to a preset object category; after being configured to determine, based on the corner heat map, the probability value of each feature point in the corner heat map being a corner point, the determination part 1202 is further configured to: determine, in the case where a corner point exists in the corner heat map corresponding to a channel, that the image to be detected contains a target object of the preset object category corresponding to that channel.

In some embodiments, when the detection part 1203 is configured to determine the target object in the image to be detected based on the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point, and the determination result includes the detection frame of the target object, the detection part 1203 is configured to: determine candidate corner point pairs, determine for each candidate corner point pair the position information of the center points pointed to by its corner points and the corresponding central area information, and determine the detection frame of the target object among the candidate detection frames.
In some embodiments, when the detection part 1203 is configured to determine the central area information corresponding to each candidate corner point pair based on the corner position information of each corner point of the candidate corner point pair in the image to be detected, it is configured to: determine the coordinate range of the central area frame corresponding to the candidate corner point pair.

In some embodiments, when the detection part 1203 is configured to determine the detection frame of the target object based on the position information of the center point pointed to by each corner point in each candidate corner point pair and the central area information corresponding to the candidate corner point pair, it is configured to: determine the detection frame of the target object among the candidate detection frames.

In some embodiments, the detection part 1203 is further configured to: determine the instance information of the target object in the image to be detected based on the detection frame of the target object and the initial feature map obtained by feature extraction of the image to be detected.

In some embodiments, when the detection part 1203 is configured to determine the instance information of the target object in the image to be detected based on the detection frame of the target object and the initial feature map obtained by feature extraction of the image to be detected, it is configured to: determine the instance information of the target object in the image to be detected.
In some embodiments, the target detection device 1200 further includes a neural network training part 1204, and the neural network training part 1204 is configured to train the neural network using sample images that contain labeled target sample objects. In some embodiments, the neural network training part 1204 is configured to train the neural network according to the following steps: obtain sample images; determine the corner position information of each sample corner point in the sample image and the centripetal offset tensor corresponding to each sample corner point; predict the target sample object in the sample image; and adjust the network parameter values of the neural network based on the predicted target sample object and the labeled target sample object in the sample image.

In the embodiments of the present disclosure, a "part" may be part of a circuit, part of a processor, part of a program or software, and so on; it may of course also be a unit, a module, or be non-modular.
Based on the same technical concept, an embodiment of the present disclosure further provides an electronic device 1300. A schematic structural diagram of the electronic device 1300 is provided by the embodiment of the present disclosure; when the electronic device 1300 runs, the steps of the target detection method described in the above method embodiments are executed, so that the target object in the image to be detected is determined.
The embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the target detection method described in the above method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium. The embodiments of the present disclosure also provide a computer program, including computer-readable code; when the computer-readable code runs in an electronic device, the processor in the electronic device executes the target detection method described in the first aspect above. The computer program product of the target detection method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code, and the program code includes instructions that can be used to execute the steps of the target detection method described in the above method embodiments. The embodiments of the present disclosure also provide a computer program which, when executed by a processor, implements any one of the methods in the foregoing embodiments. The computer program product can be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
For the working process of the system and device described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here. In the several embodiments provided by the present disclosure, it should be understood that the disclosed system, device, and method may be implemented in other ways. The device embodiments described above are merely illustrative. The division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some communication interfaces or through indirect coupling or communication connection between devices or units, and may be in electrical, mechanical, or other forms. The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or they may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. In addition, the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor. Based on this understanding, the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure. The aforementioned storage media include: a USB flash disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media that can store program code.
The embodiments of the present disclosure provide a target detection method and device, an electronic device, and a computer-readable storage medium. The target detection method includes: acquiring an image to be detected; determining, based on the image to be detected, the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point, where the corner points represent the position of the target object in the image to be detected; and determining the target object in the image to be detected based on the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point. The target detection method proposed in the embodiments of the present disclosure can determine the corner points belonging to the same target object based on the corner position information of each corner point in the image to be detected and the centripetal offset tensor corresponding to each corner point, and can then detect the same target object based on the determined corner points.

PCT/CN2020/135967 2020-01-22 2020-12-11 目标检测方法、装置、电子设备及计算机可读存储介质 WO2021147563A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021557733A JP2022526548A (ja) 2020-01-22 2020-12-11 ターゲット検出方法、装置、電子機器およびコンピュータ可読記憶媒体
KR1020217030884A KR20210129189A (ko) 2020-01-22 2020-12-11 타깃 검출 방법, 장치, 전자 기기 및 컴퓨터 판독 가능한 저장 매체

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010073142.6 2020-01-22
CN202010073142.6A CN111242088B (zh) 2020-01-22 2020-01-22 一种目标检测方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2021147563A1 true WO2021147563A1 (zh) 2021-07-29

Family

ID=70870017

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/135967 WO2021147563A1 (zh) 2020-01-22 2020-12-11 目标检测方法、装置、电子设备及计算机可读存储介质

Country Status (4)

Country Link
JP (1) JP2022526548A (ja)
KR (1) KR20210129189A (ja)
CN (1) CN111242088B (ja)
WO (1) WO2021147563A1 (ja)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242088B (zh) * 2020-01-22 2023-11-28 上海商汤临港智能科技有限公司 一种目标检测方法、装置、电子设备及存储介质
CN111681284A (zh) * 2020-06-09 2020-09-18 商汤集团有限公司 一种角点检测方法、装置、电子设备及存储介质
CN112215840B (zh) * 2020-10-30 2024-07-16 上海商汤临港智能科技有限公司 图像检测、行驶控制方法、装置、电子设备及存储介质
CN112270278A (zh) * 2020-11-02 2021-01-26 重庆邮电大学 一种基于关键点的蓝顶房检测方法
CN112348894B (zh) * 2020-11-03 2022-07-29 中冶赛迪重庆信息技术有限公司 废钢货车位置及状态识别方法、系统、设备及介质
CN112528847A (zh) * 2020-12-08 2021-03-19 北京嘀嘀无限科技发展有限公司 一种目标检测方法、装置、电子设备及存储介质
CN112733653A (zh) * 2020-12-30 2021-04-30 智车优行科技(北京)有限公司 目标检测方法和装置、计算机可读存储介质、电子设备
CN113822841B (zh) * 2021-01-29 2022-05-20 深圳信息职业技术学院 一种污水杂质结块检测方法、装置及相关设备
CN112699856A (zh) * 2021-03-24 2021-04-23 成都新希望金融信息有限公司 人脸装饰品识别方法、装置、电子设备及存储介质
CN113033539B (zh) * 2021-03-30 2022-12-06 北京有竹居网络技术有限公司 练字格检测方法、装置、可读介质及电子设备
CN113095228B (zh) * 2021-04-13 2024-04-30 地平线(上海)人工智能技术有限公司 图像中的目标检测方法、装置及计算机可读存储介质
CN113569911A (zh) * 2021-06-28 2021-10-29 北京百度网讯科技有限公司 车辆识别方法、装置、电子设备及存储介质
CN113743218B (zh) * 2021-08-03 2024-05-31 科大讯飞股份有限公司 一种车牌识别方法、车牌识别装置和计算机可读存储介质
CN114332977A (zh) * 2021-10-14 2022-04-12 北京百度网讯科技有限公司 关键点检测方法、装置、电子设备及存储介质
CN113920538B (zh) * 2021-10-20 2023-04-14 北京多维视通技术有限公司 目标检测方法、装置、设备、存储介质及计算机程序产品
CN113850238B (zh) * 2021-11-29 2022-03-04 北京世纪好未来教育科技有限公司 文档检测方法、装置、电子设备及存储介质
CN116309587A (zh) * 2023-05-22 2023-06-23 杭州百子尖科技股份有限公司 一种布料瑕疵检测方法、装置、电子设备及存储介质

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040259667A1 (en) * 2003-06-02 2004-12-23 Simon Berdugo Motorized image rotating target apparatus for all sports
US20100317466A1 (en) * 2009-05-24 2010-12-16 Semple Kerry J Miniature Kick Bag Game and Apparatus Kit
JP6094949B2 (ja) * 2012-06-29 2017-03-15 日本電気株式会社 画像処理装置、画像処理方法、及びプログラム
CN106557940B (zh) * 2015-09-25 2019-09-17 杭州海康威视数字技术股份有限公司 信息发布终端和方法
CN106683091B (zh) * 2017-01-06 2019-09-24 北京理工大学 一种基于深度卷积神经网络的目标分类及姿态检测方法
CN108229307B (zh) * 2017-11-22 2022-01-04 北京市商汤科技开发有限公司 用于物体检测的方法、装置和设备
CN108446707B (zh) * 2018-03-06 2020-11-24 北方工业大学 基于关键点筛选及dpm确认的遥感图像飞机检测方法
US10872406B2 (en) * 2018-04-13 2020-12-22 Taiwan Semiconductor Manufacturing Company, Ltd. Hot spot defect detecting method and hot spot defect detecting system
CN109801335A (zh) * 2019-01-08 2019-05-24 北京旷视科技有限公司 图像处理方法、装置、电子设备和计算机存储介质
CN110378891A (zh) * 2019-07-24 2019-10-25 广东工业大学 一种基于太赫兹图像的危险品检测方法、装置及设备
CN110532894B (zh) * 2019-08-05 2021-09-03 西安电子科技大学 基于边界约束CenterNet的遥感目标检测方法
CN110543838A (zh) * 2019-08-19 2019-12-06 上海光是信息科技有限公司 车辆信息检测的方法及装置
CN110647931A (zh) * 2019-09-20 2020-01-03 深圳市网心科技有限公司 物体检测方法、电子设备、系统及介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180018503A1 (en) * 2015-12-11 2018-01-18 Tencent Technology (Shenzhen) Company Limited Method, terminal, and storage medium for tracking facial critical area
CN109670503A (zh) * 2018-12-19 2019-04-23 北京旷视科技有限公司 标识检测方法、装置和电子系统
CN110490256A (zh) * 2019-08-20 2019-11-22 中国计量大学 一种基于关键点热图的车辆检测方法
CN111242088A (zh) * 2020-01-22 2020-06-05 上海商汤临港智能科技有限公司 一种目标检测方法、装置、电子设备及存储介质

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113936458A (zh) * 2021-10-12 2022-01-14 中国联合网络通信集团有限公司 高速公路拥堵判别方法、装置、设备及介质
CN113971738A (zh) * 2021-10-28 2022-01-25 成都数之联科技有限公司 图像检测方法、装置、电子设备及存储介质
CN114067365A (zh) * 2021-11-23 2022-02-18 广东工业大学 一种基于中心注意向心网络的安全帽佩戴检测方法和系统
CN114358054A (zh) * 2021-12-16 2022-04-15 中国人民解放军战略支援部队信息工程大学 复杂环境下宽带无线通信信号检测方法及系统
CN115644933A (zh) * 2022-11-17 2023-01-31 深圳微创踪影医疗装备有限公司 导管冲刷控制方法、装置、计算机设备、存储介质
CN115644933B (zh) * 2022-11-17 2023-08-22 深圳微创踪影医疗装备有限公司 导管冲刷控制方法、装置、计算机设备、存储介质

Also Published As

Publication number Publication date
CN111242088A (zh) 2020-06-05
CN111242088B (zh) 2023-11-28
JP2022526548A (ja) 2022-05-25
KR20210129189A (ko) 2021-10-27

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20915404

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20217030884

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021557733

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20915404

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19/05/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20915404

Country of ref document: EP

Kind code of ref document: A1