CN110751149A - Target object labeling method and device, computer equipment and storage medium


Info

Publication number
CN110751149A
CN110751149A (application CN201910882199.8A)
Authority
CN
China
Prior art keywords: marking, annotation, image, processed, target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910882199.8A
Other languages
Chinese (zh)
Other versions
CN110751149B (en)
Inventor
叶明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201910882199.8A
Publication of CN110751149A
Application granted
Publication of CN110751149B
Legal status: Active
Anticipated expiration: (date not listed)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08: Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a target object labeling method and apparatus based on research and development management, a computer device, and a storage medium. The method comprises the following steps: when an annotation instruction is received, an annotation operation on the image to be processed is obtained; the annotation operation carries annotation point identifiers on the image to be processed, so that at least four annotation points are obtained. A plane coordinate system constructed from a first coordinate axis and a second coordinate axis is set, in which each annotation point has a determined coordinate value. The extracted target annotation poles (extreme points) that meet a preset condition are connected in sequence to generate a target object marking frame, and the marking frame is used to label the target object on the image to be processed. Unlike the prior-art approach of generating the object frame directly from the coordinates of the target's upper-left and lower-right corners, this method improves both the accuracy of the generated marking frame and the accuracy of framing and labeling the target object.

Description

Target object labeling method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a target object labeling method and apparatus, a computer device, and a storage medium.
Background
With the development of computer technology and the growing demand for image processing, object detection and labeling techniques are widely applied. Their main purpose is as follows: a picture is input, a rectangular frame is drawn with the mouse to select a region of interest in the picture, the selected region is then saved, and the coordinates of the region's upper-left corner in the original image, together with the width and height of the selected region, are output.
However, the existing object marking frame is generated by directly selecting the coordinates of the target's upper-left and lower-right corners. Aligning just these two corner positions is too crude, so the acquired coordinate positions carry a large error.
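For context, a minimal Python sketch of the prior-art scheme just described (illustrative only; the function and variable names are assumptions, not from this application):

```python
# Prior-art style ROI selection: the frame is defined by two mouse clicks,
# a top-left and a bottom-right corner, and stored as corner plus width/height.

def roi_from_two_corners(top_left, bottom_right):
    """Return (x, y, width, height) for the region of interest."""
    x1, y1 = top_left
    x2, y2 = bottom_right
    return (x1, y1, x2 - x1, y2 - y1)

# Both clicks must land exactly on the object's true corners; an error in
# either click shifts or shrinks the whole frame, which is the weakness
# the present application addresses.
print(roi_from_two_corners((120, 80), (360, 240)))  # (120, 80, 240, 160)
```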
Disclosure of Invention
In view of the above, it is necessary to provide a target object labeling method, apparatus, computer device and storage medium capable of improving the framing accuracy of a target object.
A target object labeling method, the method comprising:
when an annotation instruction is received, acquiring an annotation operation on the image to be processed, wherein the annotation operation carries annotation point identifiers on the image to be processed, and acquiring at least four annotation points;
setting a plane coordinate system constructed from a first coordinate axis and a second coordinate axis, wherein each annotation point has a determined coordinate value in the plane coordinate system;
sequentially connecting the extracted target annotation poles that meet a preset condition to generate a target object marking frame;
and labeling the target object on the image to be processed with the target object marking frame.
In one embodiment, obtaining the annotation operation on the image to be processed, where the annotation operation carries annotation point identifiers on the image to be processed, includes:
detecting an annotation operation, wherein the annotation operation carries a current operating point position;
acquiring the current image data corresponding to the current operating point position in the image to be processed, and magnifying and displaying it, wherein the current operating point position is located at the center of the current image data;
detecting a movement operation continuous with the annotation operation, acquiring the moved position coordinate during the movement, acquiring the moved image data corresponding to that coordinate in the image to be processed, and magnifying and displaying it, wherein the moved position coordinate is located at the center of the moved image data;
and when the annotation operation ends, determining the moved position coordinate as an annotation point of the annotation operation on the image to be processed.
In one embodiment, before obtaining the annotation operation on the image to be processed, the method further includes:
when an annotation instruction for an image to be processed is received, acquiring the annotation information corresponding to the annotation instruction, and generating the annotation operation corresponding to the annotation information; the annotation information comprises the annotation mode, the application scene of the image to be processed, and the annotation precision.
In one embodiment, when an annotation instruction for an image to be processed is received, acquiring the annotation information corresponding to the annotation instruction and generating the annotation operation corresponding to the annotation information includes:
when an annotation instruction for an image to be processed is received, determining the application scene of the image to be processed corresponding to the annotation instruction; the application scenes include, but are not limited to, an insurance claim settlement service scene and a vehicle damage assessment service scene;
acquiring the preset annotation precision corresponding to the application scene of the image to be processed; the preset annotation precision comprises a first annotation precision, a second annotation precision and a third annotation precision, the precision decreasing successively from the first to the third;
acquiring the annotation mode corresponding to the annotation information; the annotation modes comprise text annotation, arrow annotation and annotation-frame annotation;
and generating the image annotation operation for the image to be processed according to the corresponding annotation mode and annotation precision.
In one embodiment, before sequentially connecting the extracted target annotation poles that meet the preset condition to generate the target object marking frame, the method further includes:
training a deep learning model on the training data to obtain a trained pole correction model;
and inputting the coordinate values of the annotation poles into the trained pole correction model, and outputting the corrected coordinate values of the annotation poles to obtain the target annotation poles that meet the preset condition.
In one embodiment, before training the deep learning model on the training data to obtain the trained pole correction model, the method further includes:
acquiring actual scene data and inputting it into the depth detection model to generate prediction data;
screening the prediction data according to a preset screening rule to obtain the fuzzy data that meets the screening rule;
and correcting the fuzzy data to generate the training data.
In one embodiment, training the deep learning model on the training data to obtain a trained pole correction model includes:
iteratively performing the extraction and correction of fuzzy data, and updating the training data at a preset period;
and fine-tuning the depth detection model with the updated training data until no fuzzy data is generated, thereby obtaining the trained pole correction model.
A target object annotation apparatus, the apparatus comprising:
an annotation instruction receiving module, configured to, when an annotation instruction is received, acquire an annotation operation on the image to be processed, wherein the annotation operation carries annotation point identifiers on the image to be processed, and acquire at least four annotation points;
a plane coordinate system construction module, configured to set a plane coordinate system constructed from a first coordinate axis and a second coordinate axis, wherein each annotation point has a determined coordinate value in the plane coordinate system;
a target object marking frame generating module, configured to sequentially connect the extracted target annotation poles that meet the preset condition to generate a target object marking frame;
and a target object labeling module, configured to label the target object on the image to be processed with the target object marking frame.
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
when an annotation instruction is received, acquiring an annotation operation on the image to be processed, wherein the annotation operation carries annotation point identifiers on the image to be processed, and acquiring at least four annotation points;
setting a plane coordinate system constructed from a first coordinate axis and a second coordinate axis, wherein each annotation point has a determined coordinate value in the plane coordinate system;
sequentially connecting the extracted target annotation poles that meet a preset condition to generate a target object marking frame;
and labeling the target object on the image to be processed with the target object marking frame.
A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of:
when an annotation instruction is received, acquiring an annotation operation on the image to be processed, wherein the annotation operation carries annotation point identifiers on the image to be processed, and acquiring at least four annotation points;
setting a plane coordinate system constructed from a first coordinate axis and a second coordinate axis, wherein each annotation point has a determined coordinate value in the plane coordinate system;
sequentially connecting the extracted target annotation poles that meet a preset condition to generate a target object marking frame;
and labeling the target object on the image to be processed with the target object marking frame.
According to the target object labeling method and apparatus, the computer device and the storage medium, when an annotation instruction is received, the annotation operation on the image to be processed is obtained; the annotation operation carries annotation point identifiers on the image to be processed, and at least four annotation points are obtained. A plane coordinate system constructed from a first coordinate axis and a second coordinate axis is set, in which each annotation point has a determined coordinate value. The extracted target annotation poles that meet the preset condition are connected in sequence to generate a target object marking frame, which is then used to label the target object on the image to be processed. Unlike the prior-art approach of generating the object frame directly from the coordinates of the target's upper-left and lower-right corners, this improves the accuracy of the generated marking frame and hence the accuracy of framing and labeling the target object.
Drawings
FIG. 1 is a flowchart illustrating a method for labeling a target object according to an embodiment;
FIG. 2 is a flowchart illustrating a method for labeling a target object according to another embodiment;
FIG. 3 is a block diagram illustrating an exemplary embodiment of a target object labeling apparatus;
FIG. 4 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The target object labeling method can be applied to a user terminal. On receiving an annotation instruction issued by the user, the user terminal acquires the annotation operation on the image to be processed; the annotation operation carries annotation point identifiers on the image to be processed, and at least four annotation points are acquired. A plane coordinate system constructed from a first coordinate axis and a second coordinate axis is set, in which each annotation point has a determined coordinate value. A target object marking frame is generated based on the extracted target annotation poles that meet the preset condition, and the frame is used to label the target object on the image to be processed. The user terminal may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or a portable wearable device.
In an embodiment, as shown in fig. 1, a target object labeling method is provided. Taking its application to a user terminal as an example, the method includes the following steps:
step S102, when receiving an annotation instruction, obtaining an annotation operation of the image to be processed, wherein the annotation operation carries an annotation point identifier on the image to be processed, and obtaining at least four annotation points.
Specifically, the user can send out a labeling instruction on an application program of the terminal, including a mouse click operation and a keyboard input operation, which can both realize sending of the labeling instruction. The user can click a marking button of the application program marking interface by using a mouse, wherein the marking mode is specifically selected, the application scene of the image to be processed and the like are included, or a specific marking instruction is input by the application program marking interface, and the marking mode, the application scene, the marking result accuracy and the like are included. And when the terminal detects that the application program has the marking instruction sent by the user, performing marking operation corresponding to the marking information of the marking instruction.
The application scene can include human body labeling and vehicle labeling, specifically, when the human body image needs to be labeled, the upper half part of the human body image, namely the head part, can be labeled, the whole human body can be labeled, and the labeled human body image can be applied to user identity verification such as insurance claim settlement services. And under the scene of vehicle identification, the vehicle in the image can be labeled to obtain a clear vehicle image, and the labeled vehicle image can be applied to damage assessment business, vehicle insurance claim settlement business and the like of the vehicle.
Further, by identifying each marking point on the image to be processed and acquiring the coordinate value of each marking point, the coordinate value of each marking point comprises a first coordinate value and a second coordinate value, and then comparing the first coordinate value size or the second coordinate value size of each marking point respectively, the target marking pole meeting the preset condition is obtained. Because the annotation operation carries the annotation point identifier on the image to be processed, and the corresponding target object annotation frame is formed according to the requirement of the annotation operation, at least four annotation points are needed, when the annotation instruction is received, the annotation operation of the image to be processed and at least four annotation points carried by the annotation operation on the image to be processed are obtained.
Step S104, set a plane coordinate system constructed from the first coordinate axis and the second coordinate axis, wherein each annotation point has a determined coordinate value in the plane coordinate system.
Specifically, a plane coordinate system constructed from a first coordinate axis and a second coordinate axis is set, and the coordinate value of each annotation point in this system is determined. Based on the coordinate values of all annotation points, four annotation poles are extracted from them, each meeting one of the following conditions: the pole's coordinate value on the first coordinate axis is the minimum or maximum of all annotation points' first-axis coordinate values, or its coordinate value on the second coordinate axis is the minimum or maximum of all annotation points' second-axis coordinate values.
Further, extracting the four annotation poles from all annotation points based on their coordinate values specifically includes: identifying each annotation point on the image to be processed and acquiring its coordinate value, which comprises a first coordinate value and a second coordinate value, and then comparing the first coordinate values, or the second coordinate values, of the annotation points to obtain the four annotation poles that meet the preset condition.
The preset condition is: the pole's coordinate value on the first coordinate axis is the minimum or maximum of all annotation points' first-axis coordinate values, or its coordinate value on the second coordinate axis is the minimum or maximum of all annotation points' second-axis coordinate values.
In this embodiment, the first coordinate axis may be the abscissa axis and the second coordinate axis the ordinate axis; the abscissa axis indicates the length of the image to be processed, the ordinate axis indicates its height, and a corresponding value range can be formed by setting a maximum and a minimum for each axis. The first of the four quadrants formed by the two axes is selected as the image import area, and the image to be processed is imported into the first quadrant once obtained. The four extracted annotation poles are the four annotation points that respectively attain the maximum abscissa, the maximum ordinate, the minimum abscissa and the minimum ordinate among all annotation points on the image to be processed.
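As a minimal sketch of this pole extraction (a hypothetical Python illustration, assuming annotation points are (x, y) tuples in the first quadrant; the names are not from the application):

```python
# Extract the four annotation poles (extreme points) from the annotation
# points: the points attaining the minimum/maximum first-axis (x) value and
# the minimum/maximum second-axis (y) value.

def extract_poles(points):
    """points: list of (x, y) tuples; returns the four extreme points."""
    assert len(points) >= 4, "the method requires at least four annotation points"
    left   = min(points, key=lambda p: p[0])  # minimum abscissa
    right  = max(points, key=lambda p: p[0])  # maximum abscissa
    bottom = min(points, key=lambda p: p[1])  # minimum ordinate
    top    = max(points, key=lambda p: p[1])  # maximum ordinate
    return left, right, bottom, top

points = [(12, 40), (55, 18), (90, 52), (33, 77), (60, 45)]
print(extract_poles(points))  # ((12, 40), (90, 52), (55, 18), (33, 77))
```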
Step S106, generate the target object marking frame based on the extracted target annotation poles that meet the preset condition.
Specifically, generating the target object marking frame based on the extracted target annotation poles includes the following steps:
when a pole's coordinate value on the first coordinate axis is the minimum or maximum of all annotation points' first-axis values, determine the straight line through that pole parallel to the second coordinate axis; when a pole's coordinate value on the second coordinate axis is the minimum or maximum of all annotation points' second-axis values, determine the straight line through that pole parallel to the first coordinate axis. By determining the four intersection points of the lines parallel to the first coordinate axis with the lines parallel to the second coordinate axis, the target object marking frame can be generated from these parallel lines and the four intersection points.
As stated above, the four annotation poles are extracted from all annotation points based on their coordinate values, each pole meeting one of the preset conditions: its first-axis coordinate value is the minimum or maximum of all annotation points' first-axis values, or its second-axis coordinate value is the minimum or maximum of all annotation points' second-axis values.
Further, in this embodiment, with the first coordinate axis as the abscissa axis and the second coordinate axis as the ordinate axis: when an extracted pole's abscissa is the maximum or minimum abscissa of all annotation points, the straight line through the pole parallel to the ordinate axis is determined; when a pole's ordinate is the maximum or minimum ordinate of all annotation points, the straight line through the pole parallel to the abscissa axis is determined. The four intersection points of the two abscissa-parallel lines and the two ordinate-parallel lines are determined, and the target object marking frame is generated from these lines and intersection points. The frame can accurately enclose the target object on the image to be processed, with all of the target object's pixels framed inside it.
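Continuing the sketch above, the four axis-parallel lines through the poles meet at the four corners of the marking frame; a hypothetical rendering:

```python
def frame_from_poles(left, right, bottom, top):
    """Intersect the vertical lines x = x_min and x = x_max with the
    horizontal lines y = y_min and y = y_max; the four intersection points
    are the corners of the target object marking frame."""
    x_min, x_max = left[0], right[0]
    y_min, y_max = bottom[1], top[1]
    return [(x_min, y_min), (x_max, y_min), (x_max, y_max), (x_min, y_max)]

corners = frame_from_poles((12, 40), (90, 52), (55, 18), (33, 77))
print(corners)  # [(12, 18), (90, 18), (90, 77), (12, 77)]
# Every annotation point lies on or inside this frame by construction.
```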
Step S108, label the target object on the image to be processed with the target object marking frame.
Specifically, the target object on the image to be processed is labeled with the target object marking frame: all annotation poles corresponding to the target object are extracted, the marking frame is generated from them, and the frame is used to frame and label the target object on the image to be processed.
In this target object labeling method, when an annotation instruction is received, the annotation operation on the image to be processed is obtained; the annotation operation carries annotation point identifiers on the image, and at least four annotation points are obtained. A plane coordinate system constructed from a first coordinate axis and a second coordinate axis is set, in which each annotation point has a determined coordinate value. The extracted target annotation poles that meet the preset condition are connected in sequence to generate a target object marking frame, which is then used to label the target object on the image. Unlike the prior-art approach of generating the object frame directly from the coordinates of the target's upper-left and lower-right corners, this improves the accuracy of the generated marking frame and hence the accuracy of framing and labeling the target object.
In one embodiment, as shown in fig. 2, a target object labeling method is provided, which further includes the following steps:
step S202, a marking operation is detected, and the marking operation carries the position of the current operating point.
Specifically, when the terminal detects the marking operation, the position of the current operation point carried by the marking operation needs to be acquired, the acquired position of the current operation point can be amplified, and the display size of the marking pole can be checked.
Step S204, acquiring current image data corresponding to the current operating point position in the image to be processed, and amplifying and displaying the acquired current image data, wherein the current operating point position is located at the center position of the current image data.
Specifically, after a current operating point position carried by a labeling operation is acquired, current image data corresponding to the current operating point is acquired from an image to be processed based on the current operating point position, when the acquired current image data is amplified and displayed, mouse pressing operation is performed for a user instead of mouse clicking amplification operation performed by the user, when the mouse pressing operation is detected, it is indicated that the corresponding current image data needs to be amplified when the pressing operation is performed, a magnifying glass image of a pressing area is generated, and a central area is a mouse pressing position.
And S206, detecting the moving operation continuous with the labeling operation, acquiring the position coordinate after moving in the moving operation process, acquiring the image data after moving corresponding to the position coordinate after moving in the image to be processed, and amplifying and displaying the acquired image data after moving, wherein the position coordinate after moving is positioned at the central position of the image data after moving.
Specifically, the user may perform a mouse movement operation while pressing the mouse, and may determine whether the user performs the mouse movement operation by detecting the movement operation that is continuous with the labeling operation. In the moving process, the position of the mark point when the mouse moving operation is finished, namely the position coordinate after the movement is obtained, the image after the movement corresponding to the position after the movement is obtained from the image to be processed, the image after the movement corresponding to the position coordinate after the movement is amplified and displayed, the central position is the mouse pressing operation and the moving operation, and the mark point corresponding to the coordinate position after the movement is finished simultaneously is the position coordinate after the movement also located at the central position of the image data after the movement.
And step S208, when the marking operation is finished, determining the position coordinate after the movement as a marking point of the marking operation on the image to be processed.
Specifically, after the marking operation is finished, the moved coordinates located in the center position of the moved image data are determined as marking points of the marking operation on the image to be processed, the display sizes of the marking points and the original sizes after the marking operation are finished do not change correspondingly along with mouse clicking, pressing and releasing operations, and the marking accuracy of the marking points can be guaranteed.
In the target object marking method, by acquiring the current image data corresponding to the current operating point position in the image to be processed, and the obtained current image data is amplified and displayed, and the position of the current operating point is positioned at the central position of the current image data, by detecting the continuous movement operation with the labeling operation, acquiring the moved image data corresponding to the moved position coordinate in the image to be processed, amplifying and displaying the acquired moved image data, wherein the moved position coordinate is positioned at the central position of the moved image data, and when the marking operation is finished, the position coordinates after the movement are determined as marking points of the marking operation on the image to be processed, and the obtained display size of the marking points is irrelevant to the amplifying operation and the moving operation in the marking operation, so that the display size of the marking points is prevented from being changed excessively, and the marking precision of the marking points can be improved.
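The press/move/release flow of steps S202 to S208 can be pictured with a hypothetical event-handler sketch (the callback names and the NumPy-style image array are assumptions; the actual display code is omitted):

```python
# Sketch of the magnifier behaviour: pressing shows a magnified patch centered
# on the operating point, moving while pressed re-centers the patch, and
# releasing fixes the moved coordinate as the annotation point.

class Magnifier:
    def __init__(self, image, half_window=64):
        self.image = image              # image to be processed (2-D array assumed)
        self.half_window = half_window  # half-size of the magnified patch
        self.annotation_point = None

    def on_press(self, x, y):           # annotation operation begins
        self._show_patch(x, y)

    def on_move(self, x, y):            # movement continuous with the operation
        self._show_patch(x, y)

    def on_release(self, x, y):         # annotation operation ends
        self.annotation_point = (x, y)  # moved coordinate becomes the point

    def _show_patch(self, x, y):
        w = self.half_window
        patch = self.image[max(0, y - w):y + w, max(0, x - w):x + w]
        # display `patch` enlarged, with (x, y) at its center (omitted here)
```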
In one embodiment, before obtaining the annotation operation on the image to be processed, the method further comprises:
when an annotation instruction for an image to be processed is received, acquiring the annotation information corresponding to the annotation instruction, and generating the annotation operation corresponding to the annotation information; the annotation information comprises the annotation mode, the application scene of the image to be processed, and the annotation precision. The image to be processed may be imported by the user, or the user may input its storage location and the terminal obtains the image along that storage path; when the image is not stored locally, the terminal obtains an online storage path input by the user and downloads the image in real time along that path.
Specifically, the annotation mode indicates one of several selectable modes on the annotation page, such as text annotation, directional arrow annotation, or annotation-frame annotation after the target object marking frame is drawn. The application scenes of the image to be processed include: 1) in an insurance claim settlement service, annotating the users in the images to be processed and verifying the annotated users' identities; 2) in a vehicle damage assessment service, recognizing the vehicles in the images to be processed and framing and labeling them with the target object marking frame; and other scenes. The annotation precision comprises high, medium and low precision requirements, and the precision can be set according to actual needs for the application scenes of different images. In this step, when the annotation instruction for the image to be processed is received, the annotation information corresponding to the instruction is acquired, the corresponding annotation operation is generated, and the generated operation is obtained and executed, which better meets the user's requirements.
In one embodiment, when an annotation instruction for an image to be processed is received, acquiring the annotation information corresponding to the annotation instruction and generating the annotation operation corresponding to the annotation information includes the following steps:
when an annotation instruction for an image to be processed is received, determining the application scene of the image to be processed corresponding to the annotation instruction; the application scenes include, but are not limited to, an insurance claim settlement service scene and a vehicle damage assessment service scene;
acquiring the preset annotation precision corresponding to the application scene of the image to be processed; the preset annotation precision comprises a first annotation precision, a second annotation precision and a third annotation precision, the precision decreasing successively from the first to the third;
acquiring the annotation mode corresponding to the annotation information; the annotation modes comprise text annotation, arrow annotation and annotation-frame annotation;
and generating the image annotation operation for the image to be processed according to the corresponding annotation mode and annotation precision.
Specifically, the application scenes may include human body annotation and vehicle annotation. When a human body image needs to be annotated, either the upper half of the body, i.e. the head region, or the whole body can be annotated, and the annotated human body image can be applied to user identity auditing such as insurance claim settlement services. In a vehicle recognition scene, the vehicle in the image can be annotated to obtain a clear vehicle image, which can be applied to vehicle damage assessment, vehicle insurance claim settlement, and similar services.
The annotation mode indicates one of several selectable modes on the annotation page, such as text annotation, directional arrow annotation, or annotation-frame annotation after the target object marking frame is drawn. The annotation precision comprises a first, a second and a third precision requirement, decreasing successively from the first to the third; the precision corresponding to each application scene of the image to be processed can be set according to actual needs.
Further, the image to be processed is annotated in its corresponding application scene according to the annotation mode and the corresponding annotation precision.
In this step, when an annotation instruction for the image to be processed is received, the application scene corresponding to the instruction is determined, the preset annotation precision for that scene and the annotation mode corresponding to the annotation information are acquired, and the image annotation operation for the image is generated according to them; for different application scenes, an image annotation operation matching the preset precision is generated in a suitable annotation mode, which better meets the user's requirements.
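A small configuration sketch of this scene-to-precision lookup (the enum values and the mapping are illustrative assumptions, not values fixed by the application):

```python
from enum import Enum

class Precision(Enum):
    FIRST = 1   # highest annotation precision
    SECOND = 2
    THIRD = 3   # lowest annotation precision

ANNOTATION_MODES = {"text", "arrow", "annotation_frame"}

SCENE_PRECISION = {  # preset precision per application scene (assumed values)
    "insurance_claim_settlement": Precision.FIRST,
    "vehicle_damage_assessment": Precision.SECOND,
}

def build_annotation_operation(scene, mode):
    """Generate the image annotation operation from scene, mode and precision."""
    if mode not in ANNOTATION_MODES:
        raise ValueError(f"unsupported annotation mode: {mode}")
    precision = SCENE_PRECISION.get(scene, Precision.THIRD)
    return {"scene": scene, "mode": mode, "precision": precision}

print(build_annotation_operation("vehicle_damage_assessment", "annotation_frame"))
```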
In one embodiment, before sequentially connecting the extracted target annotation poles that meet the preset condition to generate the target object marking frame, the method further includes:
acquiring actual scene data and inputting it into the depth detection model to generate prediction data;
screening the prediction data according to a preset screening rule to obtain the fuzzy data that meets the screening rule;
correcting the fuzzy data to generate training data;
training the deep learning model on the training data to obtain a trained pole correction model;
and inputting the coordinate values of all annotation poles into the trained pole correction model, and outputting the corrected coordinate values of the annotation poles to obtain the target annotation poles that meet the preset condition.
Specifically, existing actual scene data from the application scene of the image to be processed is input into the depth detection model to obtain prediction data, and the prediction data is screened with the preset screening rule to obtain the fuzzy data that meets it. An automatic program finds, among the prediction results, the fuzzy data meeting either of the following two conditions (if many results qualify, a portion, e.g. 1000 items, can be sampled first): first, the confidence of a prediction box lies between 0.5 and 0.7; second, there are two prediction boxes, each with confidence above 0.7, whose overlap rate (IOU) lies between 0.3 and 0.5.
A confidence interval of a probability sample is an interval estimate of some population parameter. It shows the degree to which the true value of the parameter falls, with a given probability, around the measurement result, and thus gives the credibility of the measured value. The confidence level is the probability that the population parameter falls within the region given by the sample statistic, and the confidence interval is the error range between the sample statistic and the population parameter at a given confidence level: the wider the confidence interval, the higher the confidence level.
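The screening rule itself is mechanical; a minimal Python sketch (assuming prediction boxes are (x1, y1, x2, y2, confidence) tuples, which is an assumption of this illustration):

```python
# Keep predictions whose confidence lies in (0.5, 0.7), plus pairs of boxes
# that are each above 0.7 confidence but overlap with IOU in (0.3, 0.5).

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2, ...)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def screen_fuzzy(boxes):
    fuzzy = [b for b in boxes if 0.5 < b[4] < 0.7]       # condition 1
    strong = [b for b in boxes if b[4] > 0.7]
    for i, a in enumerate(strong):                        # condition 2
        for b in strong[i + 1:]:
            if 0.3 < iou(a, b) < 0.5:
                fuzzy.extend([a, b])
    return fuzzy
```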
Further, the obtained fuzzy data is corrected to generate training data, and the deep learning model is trained on it to obtain the trained pole correction model. The obtained coordinate values of the annotation poles are input into the trained pole correction model, and the corrected coordinate values are output, yielding the target annotation poles that meet the preset condition.
In the above steps, actual scene data is acquired and input into the depth detection model to generate prediction data; the prediction data is screened with the preset screening rule to obtain the fuzzy data that meets it; the fuzzy data is corrected to generate training data; and the detection model is trained on this data to obtain the trained pole correction model, which corrects the obtained annotation poles and outputs target annotation poles that meet the requirements, improving the accuracy of the obtained target annotation poles.
In one embodiment, training the deep learning model on the training data to obtain a trained pole correction model includes:
iteratively extracting and correcting fuzzy data, and updating the training data at a preset period;
and fine-tuning the depth detection model with the updated training data until no fuzzy data is generated, obtaining the trained pole correction model.
In the above steps, by repeatedly performing the extraction and correction of fuzzy data, the training data is updated and enlarged; with the updated training data the depth detection model can be fine-tuned and progressively trained into the pole correction model, which corrects each obtained annotation pole and outputs target annotation poles that meet the requirements, further improving their accuracy.
It should be understood that although the steps in the flowcharts of figs. 1-2 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in figs. 1-2 may comprise multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
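The iterate-screen-correct-fine-tune loop can be summarized with a hypothetical sketch (the `predict`, `correct` and `fine_tune` hooks stand in for the detection model and the manual correction step; none of them are named in the application):

```python
# Update the training data each round from newly screened fuzzy data and
# fine-tune until the model no longer produces fuzzy predictions.

def train_pole_corrector(model, scene_data, correct, fine_tune, max_rounds=10):
    training_data = []
    for _ in range(max_rounds):
        predictions = model.predict(scene_data)
        fuzzy = screen_fuzzy(predictions)      # screening rule sketched earlier
        if not fuzzy:                          # no fuzzy data left: done
            return model
        training_data.extend(correct(fuzzy))   # corrected fuzzy data -> training set
        model = fine_tune(model, training_data)
    return model
```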
In one embodiment, as shown in fig. 3, a target object annotation apparatus is provided, comprising: an annotation instruction receiving module 302, a plane coordinate system construction module 304, a target object marking frame generating module 306 and a target object labeling module 308, wherein:
the annotation instruction receiving module 302 is configured to, when an annotation instruction is received, obtain the annotation operation on the image to be processed, wherein the annotation operation carries annotation point identifiers on the image to be processed, and obtain at least four annotation points;
the plane coordinate system construction module 304 is configured to set a plane coordinate system constructed from a first coordinate axis and a second coordinate axis, wherein each annotation point has a determined coordinate value in the plane coordinate system;
the target object marking frame generating module 306 is configured to sequentially connect the extracted target annotation poles that meet the preset condition to generate the target object marking frame;
and the target object labeling module 308 is configured to label the target object on the image to be processed with the target object marking frame.
When the annotation instruction is received, the target object annotation apparatus obtains the annotation operation on the image to be processed; the annotation operation carries annotation point identifiers on the image, and at least four annotation points are obtained. A plane coordinate system constructed from a first coordinate axis and a second coordinate axis is set, in which each annotation point has a determined coordinate value. The extracted target annotation poles that meet the preset condition are connected in sequence to generate a target object marking frame, which is then used to label the target object on the image. Unlike the prior-art approach of generating the object frame directly from the coordinates of the target's upper-left and lower-right corners, this improves the accuracy of the generated marking frame and hence the accuracy of framing and labeling the target object.
In one embodiment, the target object annotation apparatus further comprises an annotation pole magnifying module, configured to:
detect an annotation operation, wherein the annotation operation carries the current operating point position; acquire the current image data corresponding to that position in the image to be processed and magnify and display it, with the current operating point position at the center of the current image data; detect the movement operation continuous with the annotation operation, acquire the moved position coordinate during the movement, acquire the moved image data corresponding to that coordinate in the image to be processed and magnify and display it, with the moved position coordinate at the center of the moved image data; and when the annotation operation ends, determine the moved position coordinate as an annotation point of the annotation operation on the image to be processed.
The apparatus thus acquires and magnifies the current image data corresponding to the current operating point position, keeping that position at the center; by detecting the movement operation continuous with the annotation operation, it acquires and magnifies the moved image data corresponding to the moved position coordinate, keeping that coordinate at the center; and when the annotation operation ends, it determines the moved position coordinate as an annotation point on the image to be processed. The display size of the resulting annotation point is independent of the magnification and movement during the annotation operation, so excessive changes in its display size are avoided and the annotation precision of the annotation points can be improved.
In one embodiment, the target object annotation apparatus further includes an annotation operation generating module, configured to:
when an annotation instruction for an image to be processed is received, acquire the annotation information corresponding to the annotation instruction, and generate the annotation operation corresponding to the annotation information; the annotation information comprises the annotation mode, the application scene of the image to be processed, and the annotation precision.
With this module, when the annotation instruction for the image to be processed is received, the annotation information corresponding to the instruction is acquired, the corresponding annotation operation is generated, and the generated operation is obtained and executed, which better meets the user's requirements.
In one embodiment, the annotation operation generation module is further configured to:
when an annotation instruction for an image to be processed is received, determine the application scene of the image to be processed corresponding to the annotation instruction; the application scenes include, but are not limited to, an insurance claim settlement service scene and a vehicle damage assessment service scene;
acquire the preset annotation precision corresponding to the application scene of the image to be processed; the preset annotation precision comprises a first, a second and a third annotation precision, decreasing successively from the first to the third;
acquire the annotation mode corresponding to the annotation information; the annotation modes comprise text annotation, arrow annotation and annotation-frame annotation;
and generate the image annotation operation for the image to be processed according to the corresponding annotation mode and annotation precision.
When the annotation instruction for the image to be processed is received, the annotation operation generation module determines the application scene corresponding to the instruction, acquires the preset annotation precision for that scene and the annotation mode corresponding to the annotation information, and generates the image annotation operation accordingly; for different application scenes, an operation matching the preset precision is generated in a suitable annotation mode, better meeting the user's requirements.
In one embodiment, the target object annotation apparatus further comprises:
a target annotation pole acquisition module, configured to:
train the deep learning model on the training data to obtain a trained pole correction model; and input the coordinate values of the annotation poles into the trained pole correction model, outputting the corrected coordinate values of the annotation poles to obtain the target annotation poles that meet the preset condition.
The target annotation pole acquisition module trains the deep learning model on the training data to obtain the trained pole correction model, inputs the coordinate values of all annotation poles into it, and outputs the corrected coordinate values to obtain target annotation poles that meet the preset condition, thereby correcting all annotation poles and further improving the accuracy of the target object marking frame generated from them.
In one embodiment, the target object annotation apparatus further comprises:
a training data generation module, configured to:
acquire actual scene data and input it into the depth detection model to generate prediction data; screen the prediction data according to the preset screening rule to obtain the fuzzy data that meets the rule; and correct the fuzzy data to generate training data.
The training data generation module generates prediction data by inputting the acquired actual scene data into the depth detection model, screens the prediction data with the preset screening rule to obtain the fuzzy data that meets it, and corrects the fuzzy data to generate training data, so that the pole correction model can be better trained on this data.
In one embodiment, the target object annotation apparatus further comprises:
a pole correction model generation module, configured to:
iteratively extract and correct fuzzy data, updating the training data at a preset period; and fine-tune the depth detection model with the updated training data until no fuzzy data is generated, obtaining the trained pole correction model.
By repeatedly extracting and correcting fuzzy data, the pole correction model generation module updates and enlarges the training data; with the updated training data it fine-tunes the depth detection model, progressively training it into the pole correction model, which corrects each obtained annotation pole and outputs target annotation poles that meet the requirements, further improving their accuracy.
For the specific limitations of the target object annotation apparatus, reference may be made to the limitations of the target object labeling method above, which are not repeated here. Each module in the apparatus may be implemented wholly or partly in software, hardware, or a combination of the two. The modules may be embedded, in hardware form, in a processor of the computer device or exist independently of it, or be stored, in software form, in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a target object labeling method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 4 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
when an annotation instruction is received, acquiring an annotation operation on an image to be processed, the annotation operation carrying annotation point identifiers on the image to be processed, wherein at least four annotation points are acquired;
setting a plane coordinate system constructed from a first coordinate axis and a second coordinate axis, each annotation point having a definite coordinate value in the plane coordinate system;
sequentially connecting the extracted target marking poles that meet a preset condition to generate a target object marking frame;
and labeling the target object in the image to be processed with the target object marking frame (a sketch of these steps follows below).
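As an illustration only, and not the patent's implementation, the following Python sketch assumes the target marking poles are the left-, top-, right-, and bottom-most of the annotation points in the plane coordinate system and connects them in sequence into a marking frame; the function and variable names are ours.

```python
# Illustrative sketch (an assumption, not the embodiment's code): derive a
# marking frame by connecting four extreme annotation points in sequence.
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in the plane coordinate system

def marking_frame_from_points(points: List[Point]) -> List[Point]:
    """Return an ordered polygon of target marking poles."""
    if len(points) < 4:
        raise ValueError("at least four annotation points are required")
    left = min(points, key=lambda p: p[0])
    right = max(points, key=lambda p: p[0])
    top = min(points, key=lambda p: p[1])      # image y grows downward
    bottom = max(points, key=lambda p: p[1])
    return [left, top, right, bottom]          # connected sequentially

# Usage: poles of a roughly diamond-shaped object
frame = marking_frame_from_points([(10, 50), (60, 5), (110, 55), (58, 95)])
```

Connecting the four poles in sequence yields a quadrilateral that follows the object's outline more closely than the axis-aligned rectangle that could be derived from the same points.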
In one embodiment, the processor, when executing the computer program, further performs the steps of:
detecting an annotation operation, the annotation operation carrying a current operating point position;
acquiring, from the image to be processed, current image data corresponding to the current operating point position, and magnifying and displaying the acquired current image data, the current operating point position being located at the center of the current image data;
detecting a moving operation continuous with the annotation operation, acquiring the moved position coordinate during the moving operation, acquiring, from the image to be processed, moved image data corresponding to the moved position coordinate, and magnifying and displaying the acquired moved image data, the moved position coordinate being located at the center of the moved image data;
and when the annotation operation ends, determining the moved position coordinate as an annotation point of the annotation operation on the image to be processed (a sketch of the magnified display follows below).
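A minimal sketch of such a magnified display, under our own assumptions: crop a window centered on the operating point and upscale it, so the pixel being annotated stays at the center of the enlarged view. The window half-size and zoom factor are invented placeholders, not values from the embodiment.

```python
# Hedged sketch: magnify the image region around the current operating point.
import numpy as np

def magnified_view(image: np.ndarray, cx: int, cy: int,
                   half: int = 40, zoom: int = 3) -> np.ndarray:
    """Return a zoomed crop with (cx, cy) near its center."""
    h, w = image.shape[:2]
    x0, x1 = max(cx - half, 0), min(cx + half, w)   # clamp at the borders
    y0, y1 = max(cy - half, 0), min(cy + half, h)
    crop = image[y0:y1, x0:x1]
    # nearest-neighbour upscaling via repeat keeps the sketch dependency-free
    return crop.repeat(zoom, axis=0).repeat(zoom, axis=1)
```

In an interactive tool this function would be called on every move event, so the magnified window tracks the operating point until the annotation operation ends.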
In one embodiment, the processor, when executing the computer program, further performs the steps of:
when an annotation instruction for an image to be processed is received, acquiring annotation information corresponding to the annotation instruction, and generating an annotation operation corresponding to the annotation information; the annotation information comprises an annotation mode, an application scene of the image to be processed, and an annotation precision.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
when an annotation instruction for an image to be processed is received, determining the application scene of the image to be processed corresponding to the annotation instruction; application scenes of the image to be processed include, but are not limited to, an insurance claim settlement service scene and a vehicle damage assessment service scene;
acquiring a preset annotation precision corresponding to the application scene of the image to be processed; the preset annotation precision comprises a first annotation precision, a second annotation precision, and a third annotation precision, with the precision decreasing in sequence from the first to the third;
acquiring an annotation mode corresponding to the annotation information; the annotation mode comprises character annotation, arrow annotation, and annotation-frame annotation;
and generating the picture annotation operation for the image to be processed according to the corresponding annotation mode and annotation precision (one possible mapping is sketched below).
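One plausible way to wire scene, precision, and mode together. The scene names and three precision tiers mirror the text above; the dictionary keys, the chosen mode per scene, and the pixel tolerances are illustrative assumptions.

```python
# Assumed mapping from application scene to annotation preset; the concrete
# values are placeholders, not taken from the patent.
ANNOTATION_PRESETS = {
    "insurance_claim_settlement": {"precision": "first", "mode": "annotation_frame"},
    "vehicle_damage_assessment":  {"precision": "second", "mode": "arrow"},
}
TOLERANCE_PX = {"first": 1, "second": 3, "third": 5}  # assumed tolerances

def build_annotation_operation(scene: str) -> dict:
    """Resolve scene -> (mode, precision) and attach a pixel tolerance."""
    preset = ANNOTATION_PRESETS.get(
        scene, {"precision": "third", "mode": "character"})  # fallback tier
    return {**preset, "tolerance_px": TOLERANCE_PX[preset["precision"]]}
```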
In one embodiment, the processor, when executing the computer program, further performs the steps of:
training a deep learning model on the training data to obtain the trained pole correction model;
and inputting the coordinate values of the marking poles into the trained pole correction model, which outputs the corrected coordinate values of the marking poles, thereby obtaining target marking poles that meet the preset condition (a sketch of this inference step follows below).
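A sketch of the correction step, assuming the trained pole correction model is a small coordinate-regression network (here a PyTorch module); normalizing the coordinates by the image size is our assumption, not something the embodiment specifies.

```python
# Hedged sketch of pole correction: regress corrected coordinates from raw ones.
import torch

def correct_poles(model: torch.nn.Module, poles: torch.Tensor,
                  img_w: int, img_h: int) -> torch.Tensor:
    """poles: (N, 2) raw marking-pole pixel coordinates; returns (N, 2)."""
    scale = torch.tensor([img_w, img_h], dtype=poles.dtype)
    model.eval()
    with torch.no_grad():
        corrected = model(poles / scale)   # inference in normalized space
    return corrected * scale               # back to pixel coordinates
```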
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring actual scene data, inputting the actual scene data into the deep detection model, and generating prediction data;
screening the prediction data according to a preset screening rule to obtain fuzzy data that meet the screening rule;
and correcting the fuzzy data to generate the training data (one possible screening rule is sketched below).
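The text leaves the screening rule open ("a preset screening rule"). One common choice, shown purely as an assumption, is to flag predictions whose confidence falls in a middle band as fuzzy data for correction:

```python
# Assumed screening rule: mid-confidence predictions are treated as fuzzy data.
def screen_fuzzy(predictions, low=0.3, high=0.7):
    """predictions: iterable of dicts carrying a 'score' confidence key."""
    return [p for p in predictions if low <= p["score"] <= high]

def build_training_data(fuzzy, correct_fn):
    """Apply a correction step (e.g. manual relabeling) to each fuzzy
    prediction and return the corrected samples as training data."""
    return [correct_fn(p) for p in fuzzy]
```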
In one embodiment, the processor, when executing the computer program, further performs the steps of:
iteratively performing the extraction and correction of fuzzy data, and updating the training data at a preset period;
and fine-tuning the deep detection model with the updated training data until no fuzzy data are generated, thereby obtaining the trained pole correction model (the loop is sketched below).
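Putting the last two embodiments together, the iteration might look like the following schematic, where `predict`, `screen_fn`, `correct_fn`, and `fine_tune` are hypothetical stand-ins for the deep detection model's operations rather than APIs from the patent:

```python
# Schematic of the iterative refinement loop described above.
def iterative_refinement(model, scene_data, predict, screen_fn,
                         correct_fn, fine_tune, max_rounds=10):
    training_data = []
    for _ in range(max_rounds):
        fuzzy = screen_fn(predict(model, scene_data))
        if not fuzzy:        # no fuzzy data left: training has converged
            break            # model now serves as the pole correction model
        training_data.extend(correct_fn(p) for p in fuzzy)
        model = fine_tune(model, training_data)  # fine-tune on updated data
    return model
```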
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; the computer program, when executed by a processor, implements the following steps:
when an annotation instruction is received, acquiring an annotation operation on an image to be processed, the annotation operation carrying annotation point identifiers on the image to be processed, wherein at least four annotation points are acquired;
setting a plane coordinate system constructed from a first coordinate axis and a second coordinate axis, each annotation point having a definite coordinate value in the plane coordinate system;
sequentially connecting the extracted target marking poles that meet a preset condition to generate a target object marking frame;
and labeling the target object in the image to be processed with the target object marking frame.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
detecting an annotation operation, the annotation operation carrying a current operating point position;
acquiring, from the image to be processed, current image data corresponding to the current operating point position, and magnifying and displaying the acquired current image data, the current operating point position being located at the center of the current image data;
detecting a moving operation continuous with the annotation operation, acquiring the moved position coordinate during the moving operation, acquiring, from the image to be processed, moved image data corresponding to the moved position coordinate, and magnifying and displaying the acquired moved image data, the moved position coordinate being located at the center of the moved image data;
and when the annotation operation ends, determining the moved position coordinate as an annotation point of the annotation operation on the image to be processed.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
when an annotation instruction for an image to be processed is received, acquiring annotation information corresponding to the annotation instruction, and generating an annotation operation corresponding to the annotation information; the annotation information comprises an annotation mode, an application scene of the image to be processed, and an annotation precision.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
when an annotation instruction for an image to be processed is received, determining the application scene of the image to be processed corresponding to the annotation instruction; application scenes of the image to be processed include, but are not limited to, an insurance claim settlement service scene and a vehicle damage assessment service scene;
acquiring a preset annotation precision corresponding to the application scene of the image to be processed; the preset annotation precision comprises a first annotation precision, a second annotation precision, and a third annotation precision, with the precision decreasing in sequence from the first to the third;
acquiring an annotation mode corresponding to the annotation information; the annotation mode comprises character annotation, arrow annotation, and annotation-frame annotation;
and generating the picture annotation operation for the image to be processed according to the corresponding annotation mode and annotation precision.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
training a deep learning model on the training data to obtain the trained pole correction model;
and inputting the coordinate values of the marking poles into the trained pole correction model, which outputs the corrected coordinate values of the marking poles, thereby obtaining target marking poles that meet the preset condition.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
acquiring actual scene data, inputting the actual scene data into the deep detection model, and generating prediction data;
screening the prediction data according to a preset screening rule to obtain fuzzy data that meet the screening rule;
and correcting the fuzzy data to generate the training data.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
iteratively performing the extraction and correction of fuzzy data, and updating the training data at a preset period;
and fine-tuning the deep detection model with the updated training data until no fuzzy data are generated, thereby obtaining the trained pole correction model.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing related hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination of these technical features is described; nevertheless, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they are not therefore to be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A target object labeling method, the method comprising:
when an annotation instruction is received, acquiring an annotation operation on an image to be processed, the annotation operation carrying annotation point identifiers on the image to be processed, wherein at least four annotation points are acquired;
setting a plane coordinate system constructed from a first coordinate axis and a second coordinate axis, each annotation point having a definite coordinate value in the plane coordinate system;
sequentially connecting the extracted target marking poles that meet a preset condition to generate a target object marking frame;
and labeling the target object in the image to be processed with the target object marking frame.
2. The method according to claim 1, wherein acquiring the annotation operation on the image to be processed, the annotation operation carrying annotation point identifiers on the image to be processed, comprises:
detecting an annotation operation, the annotation operation carrying a current operating point position;
acquiring, from the image to be processed, current image data corresponding to the current operating point position, and magnifying and displaying the acquired current image data, the current operating point position being located at the center of the current image data;
detecting a moving operation continuous with the annotation operation, acquiring the moved position coordinate during the moving operation, acquiring, from the image to be processed, moved image data corresponding to the moved position coordinate, and magnifying and displaying the acquired moved image data, the moved position coordinate being located at the center of the moved image data;
and when the annotation operation ends, determining the moved position coordinate as an annotation point of the annotation operation on the image to be processed.
3. The method according to claim 1, wherein, before acquiring the annotation operation on the image to be processed, the method further comprises:
when an annotation instruction for the image to be processed is received, acquiring annotation information corresponding to the annotation instruction, and generating an annotation operation corresponding to the annotation information; the annotation information comprises an annotation mode, an application scene of the image to be processed, and an annotation precision.
4. The method according to claim 3, wherein, when the annotation instruction for the image to be processed is received, acquiring the annotation information corresponding to the annotation instruction and generating the annotation operation corresponding to the annotation information comprises:
when the annotation instruction for the image to be processed is received, determining the application scene of the image to be processed corresponding to the annotation instruction; application scenes of the image to be processed include, but are not limited to, an insurance claim settlement service scene and a vehicle damage assessment service scene;
acquiring a preset annotation precision corresponding to the application scene of the image to be processed; the preset annotation precision comprises a first annotation precision, a second annotation precision, and a third annotation precision, with the precision decreasing in sequence from the first to the third;
acquiring an annotation mode corresponding to the annotation information; the annotation mode comprises character annotation, arrow annotation, and annotation-frame annotation;
and generating the picture annotation operation for the image to be processed according to the corresponding annotation mode and annotation precision.
5. The method according to claim 1, wherein, before sequentially connecting the extracted target marking poles that meet the preset condition to generate the target object marking frame, the method further comprises:
training a deep learning model on training data to obtain the trained pole correction model;
and inputting the coordinate values of the marking poles into the trained pole correction model, which outputs the corrected coordinate values of the marking poles, thereby obtaining target marking poles that meet the preset condition.
6. The method according to claim 5, wherein, before training the deep learning model on the training data to obtain the trained pole correction model, the method further comprises:
acquiring actual scene data, inputting the actual scene data into the deep detection model, and generating prediction data;
screening the prediction data according to a preset screening rule to obtain fuzzy data that meet the screening rule;
and correcting the fuzzy data to generate the training data.
7. The method according to claim 5, wherein training the deep learning model on the training data to obtain the trained pole correction model comprises:
iteratively performing the extraction and correction of fuzzy data, and updating the training data at a preset period;
and fine-tuning the deep detection model with the updated training data until no fuzzy data are generated, thereby obtaining the trained pole correction model.
8. A target object labeling apparatus, the apparatus comprising:
an annotation instruction receiving module, configured to acquire an annotation operation on an image to be processed when an annotation instruction is received, the annotation operation carrying annotation point identifiers on the image to be processed, wherein at least four annotation points are acquired;
a plane coordinate system construction module, configured to set a plane coordinate system constructed from a first coordinate axis and a second coordinate axis, each annotation point having a definite coordinate value in the plane coordinate system;
a target object marking frame generation module, configured to sequentially connect the extracted target marking poles that meet a preset condition to generate a target object marking frame;
and a target object labeling module, configured to label the target object in the image to be processed with the target object marking frame.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN201910882199.8A 2019-09-18 2019-09-18 Target object labeling method, device, computer equipment and storage medium Active CN110751149B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910882199.8A CN110751149B (en) 2019-09-18 2019-09-18 Target object labeling method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910882199.8A CN110751149B (en) 2019-09-18 2019-09-18 Target object labeling method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110751149A true CN110751149A (en) 2020-02-04
CN110751149B CN110751149B (en) 2023-12-22

Family

ID=69276625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910882199.8A Active CN110751149B (en) 2019-09-18 2019-09-18 Target object labeling method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110751149B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4175328A (en) * 1976-07-12 1979-11-27 Helmut Kellner Arrangement for producing photographic pictures suitable for photogrammetric survey of spatial objects
US20160239975A1 (en) * 2014-08-20 2016-08-18 Shenzhen University Highly robust mark point decoding method and system
CN109831616A (en) * 2017-11-23 2019-05-31 上海未来伙伴机器人有限公司 A kind of face follower method and its device based on monocular cam
CN109934931A (en) * 2017-12-19 2019-06-25 阿里巴巴集团控股有限公司 Acquisition image, the method and device for establishing target object identification model
CN108108443A (en) * 2017-12-21 2018-06-01 深圳市数字城市工程研究中心 Character marking method of street view video, terminal equipment and storage medium
CN109727312A (en) * 2018-12-10 2019-05-07 广州景骐科技有限公司 Point cloud mask method, device, computer equipment and storage medium
CN110232311A (en) * 2019-04-26 2019-09-13 平安科技(深圳)有限公司 Dividing method, device and the computer equipment of hand images

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310667B (en) * 2020-02-18 2023-09-01 北京小马慧行科技有限公司 Method, device, storage medium and processor for determining whether annotation is accurate
CN111310667A (en) * 2020-02-18 2020-06-19 北京小马慧行科技有限公司 Method, device, storage medium and processor for determining whether annotation is accurate
CN111460199A (en) * 2020-03-02 2020-07-28 广州文远知行科技有限公司 Data association method and device, computer equipment and storage medium
CN111460199B (en) * 2020-03-02 2024-02-23 广州文远知行科技有限公司 Data association method, device, computer equipment and storage medium
CN111583209A (en) * 2020-04-29 2020-08-25 上海杏脉信息科技有限公司 Brain perfusion image feature point selection method, medium and electronic equipment
CN111583209B (en) * 2020-04-29 2021-11-02 上海杏脉信息科技有限公司 Brain perfusion image feature point selection method, medium and electronic equipment
CN111860484A (en) * 2020-07-22 2020-10-30 腾讯科技(深圳)有限公司 Region labeling method, device, equipment and storage medium
CN111860484B (en) * 2020-07-22 2023-11-03 腾讯科技(深圳)有限公司 Region labeling method, device, equipment and storage medium
CN112016053A (en) * 2020-08-25 2020-12-01 北京金山云网络技术有限公司 Assessment method and device for data annotation and electronic equipment
CN112149561A (en) * 2020-09-23 2020-12-29 杭州睿琪软件有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112149561B (en) * 2020-09-23 2024-04-16 杭州睿琪软件有限公司 Image processing method and device, electronic equipment and storage medium
CN113420753A (en) * 2021-07-13 2021-09-21 杭州海康威视数字技术股份有限公司 Target object frame selection area generation method and device
CN113420753B (en) * 2021-07-13 2024-01-05 杭州海康威视数字技术股份有限公司 Method and device for generating target object frame selection area
CN113781607A (en) * 2021-09-17 2021-12-10 平安科技(深圳)有限公司 Method, device and equipment for processing annotation data of OCR (optical character recognition) image and storage medium
CN113781607B (en) * 2021-09-17 2023-09-19 平安科技(深圳)有限公司 Processing method, device, equipment and storage medium for labeling data of OCR (optical character recognition) image
CN114299143A (en) * 2021-12-27 2022-04-08 浙江大华技术股份有限公司 Method and device for marking coordinate points in image

Also Published As

Publication number Publication date
CN110751149B (en) 2023-12-22

Similar Documents

Publication Publication Date Title
CN110751149B (en) Target object labeling method, device, computer equipment and storage medium
CN109117831B (en) Training method and device of object detection network
WO2022213879A1 (en) Target object detection method and apparatus, and computer device and storage medium
CN109343851B (en) Page generation method, page generation device, computer equipment and storage medium
CN111950329A (en) Target detection and model training method and device, computer equipment and storage medium
WO2021012382A1 (en) Method and apparatus for configuring chat robot, computer device and storage medium
CN109858333B (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN111667001B (en) Target re-identification method, device, computer equipment and storage medium
US20210295015A1 (en) Method and apparatus for processing information, device, and medium
CN109740487B (en) Point cloud labeling method and device, computer equipment and storage medium
CN109492531B (en) Face image key point extraction method and device, storage medium and electronic equipment
CN108304243B (en) Interface generation method and device, computer equipment and storage medium
CN110059623B (en) Method and apparatus for generating information
CN109710866B (en) Method and device for displaying pictures in online document
CN111311485A (en) Image processing method and related device
CN111832561B (en) Character sequence recognition method, device, equipment and medium based on computer vision
CN112700454A (en) Image cropping method and device, electronic equipment and storage medium
CN114723646A (en) Image data generation method with label, device, storage medium and electronic equipment
CN111223155B (en) Image data processing method, device, computer equipment and storage medium
CN113033377A (en) Character position correction method, character position correction device, electronic equipment and storage medium
CN111583264A (en) Training method for image segmentation network, image segmentation method, and storage medium
CN114549849A (en) Image recognition method and device, computer equipment and storage medium
CN111881740A (en) Face recognition method, face recognition device, electronic equipment and medium
CN113538291B (en) Card image inclination correction method, device, computer equipment and storage medium
CN114238541A (en) Sensitive target information acquisition method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant