CN111143912B - Display labeling method and related product


Info

Publication number
CN111143912B
Authority
CN
China
Prior art keywords
target
data set
image
labeling
preset
Prior art date
Legal status
Active
Application number
CN201911269065.5A
Other languages
Chinese (zh)
Other versions
CN111143912A
Inventor
李晨楠 (Li Chennan)
Current Assignee
Shenzhen Wanyi Digital Technology Co ltd
Original Assignee
Wanyi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wanyi Technology Co Ltd
Priority to CN201911269065.5A
Publication of CN111143912A
Application granted
Publication of CN111143912B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/60: Editing figures and text; Combining figures or text
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the present application provides a display labeling method and a related product. The method includes: determining an annotated data set and a non-annotated data set in a first image; generating a CAD image based on the annotated data set and the non-annotated data set, where the CAD image includes a target annotated data set, the target annotated data set being annotation data whose annotation has been completed; judging the reasonableness of the target annotated data set; and, if the target annotated data set is reasonable, displaying the CAD image.

Description

Display labeling method and related product
Technical Field
The application relates to the technical field of computers, in particular to a display labeling method and a related product.
Background
At present, some architectural designers prefer to print an online electronic design drawing as a paper drawing and annotate the paper drawing offline. Because the annotation is made on paper, it cannot be synchronized back to the online electronic design drawing, which is undoubtedly inconvenient for the other technicians (construction personnel or other designers) who work from that drawing.
Disclosure of Invention
The embodiment of the application provides a display labeling method and a related product, which can realize the synchronization of online and offline labeled data and non-labeled data and are also beneficial to improving the user experience.
In a first aspect, an embodiment of the present application provides a display labeling method, which is applied to an electronic device, where the electronic device includes at least one image processing unit, and the method includes:
determining an annotated dataset and a non-annotated dataset in a first image;
generating a CAD image based on the annotated data set and the non-annotated data set, wherein the CAD image comprises a target annotated data set which is annotated data after annotation is completed;
judging the rationality of the target labeling data set;
and if the target labeling data set is reasonable, displaying the CAD image.
In a second aspect, an embodiment of the present application provides a display labeling apparatus, which is applied to an electronic device, and the apparatus includes: a determining unit, a generating unit, a judging unit and a displaying unit, wherein,
the determining unit is used for determining an annotated data set and a non-annotated data set in the first image;
the generating unit is used for generating a CAD image based on the annotated data set and the non-annotated data set, wherein the CAD image comprises a target annotated data set, and the target annotated data set is annotated data after annotation is completed;
the judging unit is used for judging the reasonability of the target labeling data set;
and the display unit is used for displaying the CAD image if the target labeling data set is reasonable.
In a third aspect, the present application provides an electronic device, including: a processor and a memory; and one or more programs stored in the memory and configured to be executed by the processor, the programs including instructions for performing some or all of the steps described in the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program causes a computer to execute some or all of the steps described in the first aspect of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps as described in the first aspect of embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
the display labeling method and the related product described in the embodiment of the application can be applied to electronic equipment, the electronic equipment can determine a labeled data set and a non-labeled data set in a first image, a CAD image is generated based on the labeled data set and the non-labeled data set, the CAD image comprises a target labeled data set, the target labeled data set is labeled data after labeling is completed, the reasonability of the target labeled data set is judged, and if the target labeled data set is reasonable, the CAD image is displayed, so that the online and offline synchronization of the labeled data and the non-labeled data can be realized, and the improvement of user experience is facilitated.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart illustrating an embodiment of a display labeling method according to an embodiment of the present application;
fig. 2 is a schematic flowchart illustrating an embodiment of a display labeling method according to an embodiment of the present application;
fig. 3 is a schematic flowchart illustrating an embodiment of a display labeling method according to an embodiment of the present application;
FIG. 4A is a schematic structural diagram illustrating an embodiment of a display labeling apparatus according to an embodiment of the present application;
fig. 4B is a schematic structural diagram of an embodiment of a display labeling apparatus provided in the present application;
fig. 5 is a schematic structural diagram of an embodiment of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to better understand the display labeling method and the related products provided in the embodiments of the present application, a system architecture of the display labeling method applicable to the embodiments of the present application is described below.
The electronic device related to the embodiments of the present application may include various handheld devices, vehicle-mounted devices, wearable devices (smart watch, smart band, wireless headset, augmented reality/virtual reality device, smart glasses), computing devices or other processing devices connected to a wireless modem, and various forms of User Equipment (UE), mobile Station (MS), electronic device (terminal), and the like, which have wireless communication functions. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices, and in addition, the electronic devices in the embodiments of the present application may also be servers.
Please refer to fig. 1, which is a schematic flowchart of an embodiment of a display labeling method according to an embodiment of the present application. The display labeling method described in this embodiment is applied to an electronic device and includes the following steps:
101. an annotated dataset and a non-annotated dataset in the first image are determined.
The first image may be a paper image, an electronic image, or the like, which is not limited herein. The picture information corresponding to the non-annotated data set in the first image may include any one of the following: an indoor design drawing, a building construction drawing, a construction cost drawing, an electrical drawing, an automobile model drawing, and the like. The first image may include an annotated data set and a non-annotated data set, and the annotated data set may include a plurality of pieces of annotation data, where the annotation data refers to the annotations made by a user on the first image; for example, the annotation data may be marks such as points, line segments, curves, or words, which is not limited herein.
In one possible example, the step 101 of determining the annotated data set and the non-annotated data set in the first image may comprise the steps of:
11. scanning the first image to obtain a second image;
12. performing image segmentation on the second image to obtain a plurality of labeled area images and a plurality of non-labeled area images;
13. determining a plurality of labeling information corresponding to the plurality of labeling area images, wherein each labeling area corresponds to one labeling information;
14. determining a plurality of non-labeling information corresponding to the plurality of non-labeling area images, wherein each non-labeling area corresponds to one piece of non-labeling information;
15. classifying the plurality of labeling information and the plurality of non-labeling information according to a preset mode to obtain a plurality of labeling data corresponding to a plurality of first categories and a plurality of non-labeling data corresponding to a plurality of second categories, forming the plurality of labeling data into the labeling data set, and forming the plurality of non-labeling data into the non-labeling data set.
The first image may be a paper image. The electronic device may include a camera, and the first image may be scanned by the camera to obtain a second image; the second image may be an electronic image and may include the labeling data and the non-labeling data. The preset mode may be set by the user or defaulted by the system. A plurality of first categories, such as points, lines, line segments, and rectangular frames, may be preset for the labeling information, and a plurality of second categories, such as windows, walls, roofs, and beams, may be preset for the non-labeling information, which is not limited herein.
In a specific implementation, the second image may be segmented into a plurality of sub-images. Because the non-labeled regions and the labeled regions in the second image may be distributed in an interleaved manner, the sub-images may include a plurality of labeled region images and a plurality of non-labeled region images. The labeling information or non-labeling information in each labeled region image or non-labeled region image is then determined, and by classifying the labeling information and the non-labeling information, a plurality of labeling data corresponding to the first categories and a plurality of non-labeling data corresponding to the second categories are obtained. The labeling data or the non-labeling data may be points, line segments, planes, curves, and the like, or the non-labeling data may be doors, windows, and the like formed from points, lines, and planes, which is not limited herein. The plurality of labeling data form the labeled data set, and the plurality of non-labeling data form the non-labeled data set. In this way, the recognition rate of the labeling data and the non-labeling data in the image can be improved.
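As an illustrative sketch of steps 11 to 15 (not part of the patent disclosure: the tile size, the dark-pixel heuristic, and the category choice are assumptions standing in for the unspecified "preset mode"):

```python
import cv2
import numpy as np

# Hypothetical first categories (labeling) and second categories (non-labeling)
FIRST_CATEGORIES = ["point", "line", "line_segment", "rectangular_frame"]
SECOND_CATEGORIES = ["window", "wall", "roof", "beam"]

def scan_first_image(camera_frame: np.ndarray) -> np.ndarray:
    """Step 11: the camera frame of the paper drawing becomes the second image."""
    return cv2.cvtColor(camera_frame, cv2.COLOR_BGR2GRAY)

def segment_regions(second_image: np.ndarray, tile: int = 128) -> list[np.ndarray]:
    """Step 12: cut the second image into sub-images (region images)."""
    h, w = second_image.shape[:2]
    return [second_image[y:y + tile, x:x + tile]
            for y in range(0, h, tile) for x in range(0, w, tile)]

def classify_region(region: np.ndarray) -> tuple[str, str]:
    """Steps 13-14: decide whether a region image carries labeling or
    non-labeling information. The dark-pixel ratio is an assumed heuristic."""
    ink_ratio = float((region < 128).mean())
    kind = "labeled" if ink_ratio > 0.05 else "non_labeled"
    categories = FIRST_CATEGORIES if kind == "labeled" else SECOND_CATEGORIES
    return kind, categories[0]  # placeholder for the real category decision

def build_datasets(second_image: np.ndarray):
    """Step 15: classify every region and collect the two data sets."""
    labeled_set, non_labeled_set = [], []
    for region in segment_regions(second_image):
        kind, category = classify_region(region)
        (labeled_set if kind == "labeled" else non_labeled_set).append(
            {"category": category, "pixels": region})
    return labeled_set, non_labeled_set
```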
102. And generating a CAD image based on the annotated data set and the non-annotated data set, wherein the CAD image comprises a target annotated data set, and the target annotated data set is annotated data after annotation is completed.
The CAD image may refer to an image that can be recognized by computer-aided design software (e.g., AutoCAD). Both the labeled data set and the non-labeled data set in the first image may therefore be converted into data that the computer-aided design software can recognize, and when the CAD image is generated, the non-labeled data set is recognized and the labeling data whose labeling has been completed is produced as the target labeled data set.
In one possible example, the step 102 of generating a CAD image based on the annotated data set and the non-annotated data set may include the steps of:
21. on the basis of the non-labeled data set, matching a plurality of data formats corresponding to a plurality of non-labeled data corresponding to the non-labeled data set with a plurality of data formats in a preset CAD component set respectively to obtain a plurality of first matching degrees, wherein the preset CAD component set comprises a plurality of preset CAD components;
22. selecting a preset CAD component corresponding to the first matching degree exceeding a preset matching threshold value from the plurality of first matching degrees to obtain a plurality of target CAD components;
23. acquiring target data corresponding to the plurality of target CAD components, and generating target non-annotated images;
24. determining a plurality of pixel coordinates corresponding to a plurality of pixel points corresponding to each marked region image in the plurality of marked region images to obtain a plurality of pixel coordinate sets;
25. projecting the labeling data set into the target non-labeling image based on the plurality of pixel coordinate sets to obtain a plurality of target pixel coordinate sets;
26. and based on the multiple target pixel coordinate sets, marking the marked data set into the target non-marked image according to a preset marking mode to obtain the CAD image, wherein the CAD image comprises a target marked data set.
The electronic device may preset a CAD component set, where the preset CAD component set may include preset CAD components corresponding to a plurality of data formats. A preset CAD component may be a point, line, curve, or the like, or a door, window, or the like formed from points, lines, and curves, which is not limited herein. For example, for a component that is a straight line, the data format may be set to LN NAME pt1, where pt1 is the name of a point forming the straight line; the specific setting manner is not limited herein. The preset matching threshold may be set by the user or defaulted by the system, which is not limited herein.
In a specific implementation, the plurality of data formats corresponding to the plurality of non-labeled data may be matched against the plurality of data formats preset in the preset CAD component set to obtain a plurality of first matching degrees, where each first matching degree may correspond to one preset CAD component; in addition, one CAD component may be composed of at least one non-labeled datum. When a first matching degree exceeds the preset matching threshold, the difference between the corresponding data formats and the preset data formats is within a controllable range, and the component corresponding to the non-labeled data may be regarded as the same component as the preset component in the preset CAD component set. Therefore, the preset CAD components corresponding to the first matching degrees exceeding the preset matching threshold may be selected from the plurality of first matching degrees to obtain a plurality of target CAD components. The plurality of target CAD components may compose the image corresponding to the non-labeled data set; the target data corresponding to the plurality of target CAD components is then acquired to generate the target non-labeled image.
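A minimal sketch of the format matching in steps 21 and 22 (the format strings, the token-overlap similarity, and the 0.6 threshold are all illustrative assumptions; the patent only requires some first matching degree and a preset matching threshold):

```python
# Hypothetical preset CAD component set: data format string -> component name
PRESET_CAD_COMPONENTS = {
    "LN pt1 pt2": "line",
    "ARC pt1 pt2 pt3": "arc",
    "RECT pt1 pt2": "door_frame",
}

def match_degree(fmt_a: str, fmt_b: str) -> float:
    """Assumed first matching degree: fraction of shared format tokens."""
    a, b = set(fmt_a.split()), set(fmt_b.split())
    return len(a & b) / max(len(a | b), 1)

def select_target_components(non_labeled_formats: list[str],
                             preset_matching_threshold: float = 0.6) -> list[str]:
    """Steps 21-22: keep, for each non-labeled datum, the preset CAD component
    whose first matching degree exceeds the preset matching threshold."""
    targets = []
    for fmt in non_labeled_formats:
        best_fmt, component = max(PRESET_CAD_COMPONENTS.items(),
                                  key=lambda kv: match_degree(fmt, kv[0]))
        if match_degree(fmt, best_fmt) > preset_matching_threshold:
            targets.append(component)
    return targets

# select_target_components(["LN pt1 pt2", "RECT pt1 pt2"]) -> ["line", "door_frame"]
```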
Further, the plurality of labeled data corresponding to the labeled data set may be labeled into the target non-labeled image by way of coordinate positioning to obtain the CAD image. Specifically, a plurality of pixel coordinate sets may be obtained by determining the plurality of pixel coordinates corresponding to the plurality of pixel points in each of the labeled region images, where each pixel coordinate set may correspond to one labeled region image. The pixel coordinate system corresponding to each pixel coordinate set may then be determined, as may the target pixel coordinate system corresponding to the target non-labeled image. On this basis, the labeled data set may be projected into the target non-labeled image to obtain a plurality of target pixel coordinate sets, where each target pixel coordinate set may correspond to one labeled region image.
Finally, the preset labeling manner may be set by the user or defaulted by the system; for example, the labeling data may be given a format or color different from that of the non-labeling data. Based on the plurality of target pixel coordinate sets, the labeled data set may be labeled into the target non-labeled image according to the preset labeling manner to obtain the CAD image, where the CAD image may include the target labeled data set, and the target labeled data set may include the labeling data whose labeling has been completed.
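The projection and drawing in steps 24 to 26 might look like the following sketch, assuming the transform between each labeled region's pixel coordinate system and the target image's coordinate system is available as a 2x3 affine matrix (the patent does not fix how the transform is obtained) and that labels are drawn in a distinct color per the preset labeling manner:

```python
import numpy as np

def project_annotations(region_pixel_sets: list[np.ndarray],
                        affine: np.ndarray) -> list[np.ndarray]:
    """Steps 24-25: map each labeled region's pixel coordinate set into the
    target non-labeled image's coordinate system via an assumed 2x3 affine
    transform, yielding the target pixel coordinate sets."""
    projected = []
    for pts in region_pixel_sets:                # pts: (N, 2) array of (x, y)
        ones = np.ones((pts.shape[0], 1))
        projected.append(np.hstack([pts, ones]) @ affine.T)   # -> (N, 2)
    return projected

def draw_annotations(target_image: np.ndarray,
                     target_pixel_sets: list[np.ndarray],
                     color: tuple = (0, 0, 255)) -> np.ndarray:
    """Step 26: render the labeling data in a preset manner -- here simply a
    color distinct from the non-labeling data (target_image is assumed to be
    an H x W x 3 color image)."""
    cad_image = target_image.copy()
    for pts in target_pixel_sets:
        for x, y in pts.astype(int):
            if 0 <= y < cad_image.shape[0] and 0 <= x < cad_image.shape[1]:
                cad_image[y, x] = color
    return cad_image
```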
103. And judging the rationality of the target labeling data set.
After the labeling data is labeled into the CAD image, the reasonableness of the target labeled data set can be judged. Reasonableness can be understood as the correctness of the user's labeling manner in the first image. For example, the correct labels for a door and a window are the letters m and c respectively; if the user mistakenly labels a door as c, the label for the door in the target labeled data set can be considered unreasonable. Thus, the correctness of the labeling data in the CAD image can be determined by judging the reasonableness of the target labeled data set, which is beneficial to increasing the readability of the CAD image.
In a possible example, the step 103 of determining the reasonableness of the target annotation data set may include the following steps:
31. inputting the target labeling data set into a preset neural network model to obtain a plurality of pieces of characteristic information corresponding to the target labeling data set and a plurality of second matching degrees corresponding to the plurality of pieces of characteristic information and a plurality of preset mark types, wherein each preset mark type corresponds to one second matching degree;
32. calculating the average value of the plurality of second matching degrees to obtain an average value;
33. if the average value is larger than or equal to a preset threshold value, determining that the target labeling data set is reasonable;
34. and if the average value is smaller than the preset threshold value, determining that the target annotation data set is unreasonable.
The preset threshold may be set by the user or defaulted by the system. The preset neural network model may likewise be set by the user or defaulted by the system, without limitation; for example, it may be a convolutional neural network. The preset mark types may be set by the user or defaulted by the system and may include at least one of the following: dimensions, linearity, arc length, angles, labeling boxes, text labels, and the like, without limitation.
In a specific implementation, the target labeled data set may be input into the preset neural network model to obtain a plurality of pieces of feature information and a plurality of second matching degrees between those pieces of feature information and the plurality of preset mark types. The plurality of second matching degrees may be used to judge the reasonableness of the target labeled data set: the mean of the plurality of second matching degrees is calculated to obtain an average value. If the average value is greater than or equal to the preset threshold, the pieces of feature information corresponding to the target labeled data set match the preset mark types, that is, the target labeled data set is reasonable; otherwise, if the average value is less than the preset threshold, the labeled data corresponding to the target labeled data set is incorrect, that is, the target labeled data set is unreasonable. In this way, the reasonableness of the target labeled data set can be judged with the preset neural network model, improving the correctness of the labeling data in the CAD image.
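Steps 32 to 34 reduce to averaging the second matching degrees and comparing against the preset threshold. A sketch with the neural network stubbed out (the 0.8 threshold and the example degrees are assumptions):

```python
def judge_rationality(second_matching_degrees: list[float],
                      preset_threshold: float = 0.8) -> bool:
    """Steps 32-34: average the second matching degrees output by the preset
    neural network model and compare against the preset threshold."""
    if not second_matching_degrees:
        return False
    average = sum(second_matching_degrees) / len(second_matching_degrees)
    return average >= preset_threshold

# e.g. degrees for mark types [dimension, arc_length, text_label] (made up):
assert judge_rationality([0.9, 0.85, 0.95]) is True
assert judge_rationality([0.4, 0.5, 0.6]) is False
```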
Optionally, after the step 103 of judging the reasonableness of the target labeled data set, the method may further include the following steps:
a1, if the target annotation data set is unreasonable, acquiring an annotation position of the target annotation data set in the CAD image;
a2, determining a target video corresponding to the annotation position based on a mapping relationship between preset annotation positions and preset videos;
and A3, pushing the target video, wherein the target video is used for demonstrating a correct labeling method.
After the target labeled data is judged to be unreasonable, a preset video can be pushed for the target labeled data set in order to improve the correctness of the labeling data. The preset video may be set by the user or defaulted by the system, and can demonstrate a correct labeling manner to help the user correctly label the data in the target labeled data set. In addition, labeling positions can be preset in the electronic device; for example, a labeling position can be preset for a door frame in the CAD image, the preset labeling position representing the position of the door frame in the CAD image. The electronic device can preset the mapping relationship between preset labeling positions and preset videos, so that the corresponding correctly-labeled preset video can be associated based on a preset labeling position.
In a specific implementation, the labeling position of the labeling data of the target labeled data set in the CAD image may be acquired. The labeling position may be a coordinate set formed by the plurality of pixel coordinates of the labeling data corresponding to the target labeled data set, and the position of the labeling data in the target labeled data set can be determined through this coordinate set. The electronic device may then determine the target video corresponding to the labeling position based on the mapping relationship, where the target video demonstrates a correct labeling method to remind the user to correct the target labeling data. The target video may also be pushed based on the user's click position; for example, when the electronic device detects a pressing operation by the user on the CAD image on the screen, it may directly push the target video corresponding to the pressed position. Based on the target video, the user can label the data correctly, so the readability of the entire CAD image can be increased.
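Steps A1 to A3 amount to a lookup from the labeling position to a remediation video. A sketch under stated assumptions (keying the mapping by bounding boxes and the video paths are both illustrative; the patent leaves the mapping's concrete form open):

```python
# Hypothetical mapping: preset labeling position (bounding box) -> video path
PRESET_POSITION_TO_VIDEO = {
    (120, 80, 180, 140): "videos/door_labeling_demo.mp4",
    (300, 40, 420, 90): "videos/window_labeling_demo.mp4",
}

def bounding_box(coords: list[tuple[int, int]]) -> tuple[int, int, int, int]:
    """The labeling position as the bounding box of its pixel coordinate set."""
    xs, ys = zip(*coords)
    return min(xs), min(ys), max(xs), max(ys)

def find_target_video(annotation_coords: list[tuple[int, int]]) -> str | None:
    """Steps A1-A2: locate the unreasonable labeling data and resolve the
    preset video whose preset position contains the labeling's center."""
    x0, y0, x1, y1 = bounding_box(annotation_coords)
    cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
    for (px0, py0, px1, py1), video in PRESET_POSITION_TO_VIDEO.items():
        if px0 <= cx <= px1 and py0 <= cy <= py1:
            return video  # step A3: push this video to the user
    return None
```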
104. And if the target labeling data set is reasonable, displaying the CAD image.
After the target labeled data set is judged to be reasonable, the electronic device can display the CAD image. The CAD image can include the data contained in the target labeled data set and the non-labeled data set; that is, the CAD image can include images such as an indoor design drawing, a building construction drawing, a construction cost drawing, an electrical drawing, or an automobile model drawing, together with the labeling data made for those images. This is beneficial to improving user experience, and online and offline labeling synchronization can be realized.
The display labeling method provided in the embodiment of the present application is applied to an electronic device. The electronic device can determine a labeled data set and a non-labeled data set in a first image and generate a CAD image based on the labeled data set and the non-labeled data set, where the CAD image includes a target labeled data set, the target labeled data set being labeling data whose labeling has been completed. The reasonableness of the target labeled data set is judged, and if the target labeled data set is reasonable, the CAD image is displayed. In this way, synchronization of online and offline labeled data and non-labeled data can be realized, which is beneficial to improving user experience.
In accordance with the above, please refer to fig. 2, which is a schematic flowchart of an embodiment of a display labeling method according to an embodiment of the present application. The display labeling method described in this embodiment is applied to an electronic device and includes the following steps:
201. an annotated dataset and a non-annotated dataset in the first image are determined.
202. And on the basis of the non-labeled data set, matching a plurality of data formats corresponding to a plurality of non-labeled data corresponding to the non-labeled data set with a plurality of data formats in a preset CAD component set respectively to obtain a plurality of first matching degrees, wherein the preset CAD component set comprises a plurality of preset CAD components.
203. And selecting a preset CAD component corresponding to the first matching degree exceeding a preset matching threshold value from the plurality of first matching degrees to obtain a plurality of target CAD components.
204. And acquiring target data corresponding to the plurality of target CAD components to generate target non-annotated images.
205. And determining a plurality of pixel coordinates corresponding to a plurality of pixel points corresponding to each marked region image in the plurality of marked region images to obtain a plurality of pixel coordinate sets.
206. And projecting the labeling data set into the target non-labeling image based on the plurality of pixel coordinate sets to obtain a plurality of target pixel coordinate sets.
207. And based on the multiple target pixel coordinate sets, marking the marked data set into the target non-marked image according to a preset marking mode to obtain the CAD image, wherein the CAD image comprises a target marked data set.
208. And judging the rationality of the target labeling data set.
209. And if the target labeling data set is reasonable, displaying the CAD image.
Optionally, for the detailed description of steps 201 to 209, reference may be made to the corresponding steps 101 to 104 of the display labeling method described in fig. 1, which will not be repeated here.
The display labeling method provided in the embodiment of the present application is applied to an electronic device. The electronic device can determine a labeled data set and a non-labeled data set in a first image; match, based on the non-labeled data set, the plurality of data formats corresponding to the plurality of non-labeled data against the plurality of data formats in a preset CAD component set to obtain a plurality of first matching degrees, where the preset CAD component set includes a plurality of preset CAD components; select the preset CAD components corresponding to the first matching degrees exceeding a preset matching threshold to obtain a plurality of target CAD components; acquire the target data corresponding to the plurality of target CAD components and generate a target non-labeled image; determine the plurality of pixel coordinates corresponding to the pixel points of each labeled region image to obtain a plurality of pixel coordinate sets; project the labeled data set into the target non-labeled image based on the plurality of pixel coordinate sets to obtain a plurality of target pixel coordinate sets; and, based on the plurality of target pixel coordinate sets, label the labeled data set into the target non-labeled image according to a preset labeling manner to obtain the CAD image, where the CAD image includes the target labeled data set, judge the reasonableness of the target labeled data set, and display the CAD image if it is reasonable. In this way, synchronization of online and offline labeled data and non-labeled data can be realized, which is beneficial to improving user experience.
In accordance with the above, please refer to fig. 3, which is a schematic flowchart of an embodiment of a display labeling method according to an embodiment of the present application. The display labeling method described in this embodiment is applied to an electronic device and includes the following steps:
301. an annotated dataset and a non-annotated dataset in the first image are determined.
302. And generating a CAD image based on the annotated data set and the non-annotated data set, wherein the CAD image comprises a target annotated data set, and the target annotated data set is annotated data after annotation is completed.
303. And judging the rationality of the target labeling data set.
304. And if the target labeling data set is reasonable, displaying the CAD image.
305. And if the target annotation data set is unreasonable, acquiring the annotation position of the target annotation data set in the CAD image.
306. And determining the target video corresponding to the marking position based on the mapping relation between the preset marking position and the preset video.
307. And pushing the target video, wherein the target video is used for demonstrating a correct marking method.
Optionally, for the detailed description of steps 301 to 307, reference may be made to the corresponding steps 101 to 104 of the display labeling method described in fig. 1, which will not be repeated here.
The display labeling method provided in the embodiment of the present application is applied to an electronic device. A labeled data set and a non-labeled data set in a first image can be determined, and a CAD image is generated based on the labeled data set and the non-labeled data set, where the CAD image includes a target labeled data set, the target labeled data set being labeling data whose labeling has been completed. The reasonableness of the target labeled data set is judged; if the target labeled data set is reasonable, the CAD image is displayed; if the target labeled data set is unreasonable, the labeling position of the target labeled data set in the CAD image is acquired, the target video corresponding to the labeling position is determined based on the mapping relationship between preset labeling positions and preset videos, and the target video is pushed to demonstrate a correct labeling method.
In accordance with the above, an apparatus for implementing the above display labeling method is described below:
please refer to fig. 4A, which is a schematic structural diagram of an embodiment of a display labeling apparatus according to an embodiment of the present application. The display labeling apparatus described in this embodiment is applied to an electronic device, and includes: the determining unit 401, the generating unit 402, the judging unit 403 and the displaying unit 404 are as follows:
the determining unit 401 is configured to determine an annotated data set and a non-annotated data set in the first image;
the generating unit 402 is configured to generate a CAD image based on the annotated data set and the non-annotated data set, where the CAD image includes a target annotated data set, and the target annotated data set is annotated data after annotation is completed;
the judging unit 403 is configured to judge the rationality of the target annotation data set;
the display unit 404 is configured to display the CAD image if the target annotation data set is reasonable.

The determining unit 401 may be configured to implement the method described in step 101, the generating unit 402 may be configured to implement the method described in step 102, the judging unit 403 may be configured to implement the method described in step 103, the displaying unit 404 may be configured to implement the method described in step 104, and so on.
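A minimal sketch of how the four units might be wired together (the class and the callable signatures are illustrative assumptions, not the patent's interfaces):

```python
class DisplayLabelingApparatus:
    """Hypothetical wiring of units 401-404 onto steps 101-104."""

    def __init__(self, determine, generate, judge, display):
        self.determining_unit = determine    # step 101
        self.generating_unit = generate      # step 102
        self.judging_unit = judge            # step 103
        self.display_unit = display          # step 104

    def process(self, first_image):
        labeled, non_labeled = self.determining_unit(first_image)
        cad_image, target_set = self.generating_unit(labeled, non_labeled)
        if self.judging_unit(target_set):    # reasonableness check
            self.display_unit(cad_image)     # display only if reasonable
        return cad_image
```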
It can be seen that the display labeling apparatus described in the embodiment of the present application can determine the labeled data set and the non-labeled data set in the first image, generate a CAD image based on the labeled data set and the non-labeled data set, where the CAD image includes a target labeled data set that is labeling data whose labeling has been completed, judge the reasonableness of the target labeled data set, and display the CAD image if the target labeled data set is reasonable. In this way, synchronization of online and offline labeled data and non-labeled data can be realized, which is beneficial to improving user experience.
In one possible example, in terms of determining the annotated data set and the non-annotated data set in the first image, the determining unit 401 is specifically configured to:
scanning the first image to obtain a second image;
performing image segmentation on the second image to obtain a plurality of labeled area images and a plurality of non-labeled area images;
determining a plurality of labeling information corresponding to the plurality of labeling area images, wherein each labeling area corresponds to one labeling information;
determining a plurality of non-labeling information corresponding to the plurality of non-labeling area images, wherein each non-labeling area corresponds to one piece of non-labeling information;
classifying the plurality of marked information and the plurality of non-marked information according to a preset mode to obtain a plurality of marked data corresponding to a plurality of first categories and a plurality of non-marked data corresponding to a plurality of second categories, forming the marked data into a marked data set, and forming the non-marked data into a non-marked data set.
In one possible example, in the aspect of generating a CAD image based on the annotated dataset and the non-annotated dataset, the generating unit 402 is specifically configured to:
on the basis of the non-labeled data set, matching a plurality of data formats corresponding to a plurality of non-labeled data corresponding to the non-labeled data set with a plurality of data formats in a preset CAD component set respectively to obtain a plurality of first matching degrees, wherein the preset CAD component set comprises a plurality of preset CAD components;
selecting a preset CAD component corresponding to the first matching degree exceeding a preset matching threshold value in the first matching degrees to obtain a plurality of target CAD components;
acquiring target data corresponding to the plurality of target CAD components, and generating target non-annotated images;
determining a plurality of pixel coordinates corresponding to a plurality of pixel points corresponding to each marked region image in the plurality of marked region images to obtain a plurality of pixel coordinate sets;
projecting the labeling data set into the target non-labeling image based on the plurality of pixel coordinate sets to obtain a plurality of target pixel coordinate sets;
and based on the multiple target pixel coordinate sets, marking the marked data set into the target non-marked image according to a preset marking mode to obtain the CAD image, wherein the CAD image comprises a target marked data set.

In a possible example, in terms of judging the reasonableness of the target annotation data set, the judging unit 403 is specifically configured to:
inputting the target labeling data set into a preset neural network model to obtain a plurality of pieces of characteristic information corresponding to the target labeling data set and a plurality of second matching degrees corresponding to the plurality of pieces of characteristic information and a plurality of preset mark types, wherein each preset mark type corresponds to one second matching degree;
calculating the average value of the plurality of second matching degrees to obtain an average value;
if the average value is larger than or equal to a preset threshold value, determining that the target labeling data set is reasonable;
and if the average value is smaller than the preset threshold value, determining that the target annotation data set is unreasonable.
Please refer to fig. 4B, which is a schematic structural diagram of an embodiment of a display labeling apparatus according to an embodiment of the present disclosure. The display labeling apparatus described in this embodiment is applied to an electronic device, and the apparatus further includes: an acquisition unit 405 and a push unit 406, wherein,
the obtaining unit 405 is configured to obtain a labeling position of the target labeling data set in the CAD image if the target labeling data set is not reasonable;
the determining unit 401 is further configured to determine, based on a mapping relationship between a preset labeling position and a preset video, a target video corresponding to the labeling position;
the pushing unit 406 is configured to push the target video, where the target video is used to demonstrate a correct annotation method.
It can be understood that the functions of each program module of the display labeling apparatus in this embodiment can be implemented according to the methods in the foregoing method embodiments; for the specific implementation process, reference may be made to the relevant description of the foregoing method embodiments, which is not repeated here.
In accordance with the above, please refer to fig. 5, which is a schematic structural diagram of an embodiment of an electronic device according to an embodiment of the present disclosure. The electronic device described in this embodiment includes: at least one input device 1000; at least one output device 2000; at least one processor 3000, e.g., a CPU; and a memory 4000, the input device 1000, the output device 2000, the processor 3000, and the memory 4000 being connected by a bus 5000.
The input device 1000 may be a touch panel, a physical button, or a mouse.
The output device 2000 may be a display screen.
The memory 4000 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 4000 is used for storing a set of program codes, and the input device 1000, the output device 2000 and the processor 3000 are used for calling the program codes stored in the memory 4000 to execute the following operations:
determining an annotated dataset and a non-annotated dataset in a first image;
generating a CAD image based on the annotated data set and the non-annotated data set, wherein the CAD image comprises a target annotated data set which is annotated data after annotation is completed;
judging the rationality of the target labeling data set;
and if the target labeling data set is reasonable, displaying the CAD image.
It can be seen that the electronic device described in the embodiment of the present application can determine the annotated data set and the non-annotated data set in the first image, generate a CAD image based on the annotated data set and the non-annotated data set, where the CAD image includes a target annotated data set that is annotation data whose annotation has been completed, judge the reasonableness of the target annotated data set, and display the CAD image if the target annotated data set is reasonable. In this way, synchronization of online and offline annotated data and non-annotated data can be realized, and user experience can be improved.
In one possible example, in the determining the annotated data set and the non-annotated data set in the first image, the processor 3000 is specifically configured to:
scanning the first image to obtain a second image;
performing image segmentation on the second image to obtain a plurality of marked area images and a plurality of non-marked area images;
determining a plurality of labeling information corresponding to the plurality of labeling area images, wherein each labeling area corresponds to one labeling information;
determining a plurality of non-labeling information corresponding to the plurality of non-labeling area images, wherein each non-labeling area corresponds to one piece of non-labeling information;
classifying the plurality of marked information and the plurality of non-marked information according to a preset mode to obtain a plurality of marked data corresponding to a plurality of first categories and a plurality of non-marked data corresponding to a plurality of second categories, forming the marked data into a marked data set, and forming the non-marked data into a non-marked data set.
In one possible example, in the generating a CAD image based on the annotated data set and the non-annotated data set, the processor 3000 is specifically configured to:
on the basis of the non-labeled data set, matching a plurality of data formats corresponding to a plurality of non-labeled data corresponding to the non-labeled data set with a plurality of data formats in a preset CAD component set respectively to obtain a plurality of first matching degrees, wherein the preset CAD component set comprises a plurality of preset CAD components;
selecting a preset CAD component corresponding to the first matching degree exceeding a preset matching threshold value in the first matching degrees to obtain a plurality of target CAD components;
acquiring target data corresponding to the plurality of target CAD components, and generating target non-annotated images;
determining a plurality of pixel coordinates corresponding to a plurality of pixel points corresponding to each of the plurality of labeled area images to obtain a plurality of pixel coordinate sets;
projecting the labeling data set into the target non-labeling image based on the plurality of pixel coordinate sets to obtain a plurality of target pixel coordinate sets;
and based on the multiple target pixel coordinate sets, marking the marked data set into the target non-marked image according to a preset marking mode to obtain the CAD image, wherein the CAD image comprises a target marked data set.

In one possible example, in terms of judging the reasonableness of the target annotation data set, the processor 3000 is specifically configured to:
inputting the target labeling data set into a preset neural network model to obtain a plurality of pieces of characteristic information corresponding to the target labeling data set and a plurality of second matching degrees corresponding to the plurality of pieces of characteristic information and a plurality of preset mark types, wherein each preset mark type corresponds to one second matching degree;
calculating the average value of the plurality of second matching degrees to obtain an average value;
if the average value is larger than or equal to a preset threshold value, determining that the target labeling data set is reasonable;
and if the average value is smaller than the preset threshold value, determining that the target annotation data set is unreasonable.
In a possible example, after the determining the reasonableness of the target annotation data set, the processor 3000 is further configured to:
if the target annotation data set is unreasonable, acquiring an annotation position of the target annotation data set in the CAD image;
determining a target video corresponding to the labeling position based on a mapping relation between preset labeling positions and preset videos;
and pushing the target video, wherein the target video is used for demonstrating a correct marking method.
The embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a program, and the program includes, when executed, some or all of the steps of any one of the display labeling methods described in the above method embodiments.
While the present application has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
As will be appreciated by one of skill in the art, embodiments of the present application may be provided as a method, apparatus (device), or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. A computer program may be stored/distributed on a suitable medium, supplied together with or as part of other hardware, and may also take other distributed forms, such as via the Internet or other wired or wireless telecommunication systems.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable display labeling device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable display labeling device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable display labeling device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable display labeling device to cause a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable device provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations may be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (8)

1. A display labeling method is applied to electronic equipment and comprises the following steps:
determining an annotated dataset and a non-annotated dataset in a first image;
generating a CAD image based on the annotated data set and the non-annotated data set, wherein the CAD image comprises a target annotated data set which is annotated data after annotation is completed;
judging the rationality of the target labeling data set;
if the target labeling data set is reasonable, displaying the CAD image;
wherein generating a CAD image based on the annotated dataset and the non-annotated dataset comprises: on the basis of the non-labeled data set, matching a plurality of data formats corresponding to a plurality of non-labeled data corresponding to the non-labeled data set with a plurality of data formats in a preset CAD component set respectively to obtain a plurality of first matching degrees, wherein the preset CAD component set comprises a plurality of preset CAD components; selecting a preset CAD component corresponding to the first matching degree exceeding a preset matching threshold value from the plurality of first matching degrees to obtain a plurality of target CAD components; acquiring target data corresponding to the plurality of target CAD components, and generating target non-annotated images; determining a plurality of pixel coordinates corresponding to a plurality of pixel points corresponding to each marked area image in a plurality of marked area images to obtain a plurality of pixel coordinate sets; projecting the annotation data set into the target non-annotation image based on the plurality of pixel coordinate sets to obtain a plurality of target pixel coordinate sets; and based on the multiple target pixel coordinate sets, marking the marked data set into the target non-marked image according to a preset marking mode to obtain the CAD image, wherein the CAD image comprises a target marked data set.
2. The method of claim 1, wherein determining the annotated dataset and the non-annotated dataset in the first image comprises:
scanning the first image to obtain a second image;
performing image segmentation on the second image to obtain a plurality of labeled area images and a plurality of non-labeled area images;
determining a plurality of labeling information corresponding to the plurality of labeling area images, wherein each labeling area corresponds to one labeling information;
determining a plurality of non-labeling information corresponding to the plurality of non-labeling area images, wherein each non-labeling area corresponds to one piece of non-labeling information;
classifying the plurality of marked information and the plurality of non-marked information according to a preset mode to obtain a plurality of marked data corresponding to a plurality of first categories and a plurality of non-marked data corresponding to a plurality of second categories, forming the marked data into a marked data set, and forming the non-marked data into a non-marked data set.
3. The method according to claim 1, wherein said determining the reasonableness of the target annotation data set comprises:
inputting the target labeling data set into a preset neural network model to obtain a plurality of pieces of characteristic information corresponding to the target labeling data set and a plurality of second matching degrees corresponding to the plurality of pieces of characteristic information and a plurality of preset mark types, wherein each preset mark type corresponds to one second matching degree;
calculating the average value of the plurality of second matching degrees to obtain an average value;
if the average value is larger than or equal to a preset threshold value, determining that the target labeling data set is reasonable;
and if the average value is smaller than the preset threshold value, determining that the target annotation data set is unreasonable.
4. The method according to claim 3, wherein after said determining the reasonableness of the target annotated data set, the method further comprises:
if the target annotated data set is unreasonable, acquiring an annotation position of the target annotated data set in the CAD image;
determining, based on a mapping relation between preset annotation positions and preset videos, a target video corresponding to the annotation position;
and pushing the target video, wherein the target video is used for demonstrating the correct annotation method.
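The remediation step of claim 4 is a lookup: the annotation position of an unreasonable data set keys into a preset position-to-video mapping, and the matched tutorial is pushed. A minimal sketch under stated assumptions follows; the tuple keys and video paths are hypothetical.

    # Hypothetical mapping from preset annotation positions to tutorial
    # videos demonstrating the correct annotation method.
    position_to_video = {
        ("sheet_A1", "title_block"): "videos/title_block_howto.mp4",
        ("sheet_A1", "dimension_row"): "videos/dimension_howto.mp4",
    }

    def push_tutorial(annotation_position):
        # Return the video mapped to the failed annotation position,
        # or None when no preset position matches.
        return position_to_video.get(annotation_position)

    print(push_tutorial(("sheet_A1", "title_block")))  # videos/title_block_howto.mp4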
5. A display labeling apparatus, applied to an electronic device, the apparatus comprising: a determining unit, a generating unit, a judging unit and a display unit, wherein,
the determining unit is configured to determine an annotated data set and a non-annotated data set in a first image;
the generating unit is configured to generate a CAD image based on the annotated data set and the non-annotated data set, wherein the CAD image comprises a target annotated data set, and the target annotated data set is annotation data for which annotation has been completed; the generating unit generating the CAD image based on the annotated data set and the non-annotated data set comprises: matching a plurality of data formats, each corresponding to one piece of non-annotated data in the non-annotated data set, against a plurality of data formats in a preset CAD component set to obtain a plurality of first matching degrees, wherein the preset CAD component set comprises a plurality of preset CAD components; selecting, from the plurality of first matching degrees, the preset CAD components whose first matching degrees exceed a preset matching threshold to obtain a plurality of target CAD components; acquiring target data corresponding to the plurality of target CAD components and generating a target non-annotated image; determining, for each annotated region image in the plurality of annotated region images, a plurality of pixel coordinates corresponding to its pixel points to obtain a plurality of pixel coordinate sets; projecting the annotated data set into the target non-annotated image based on the plurality of pixel coordinate sets to obtain a plurality of target pixel coordinate sets; and annotating the annotated data set into the target non-annotated image in a preset annotation mode based on the plurality of target pixel coordinate sets to obtain the CAD image;
the judging unit is configured to judge the reasonableness of the target annotated data set;
and the display unit is configured to display the CAD image if the target annotated data set is reasonable.
6. The apparatus according to claim 5, wherein, in said determining the annotated data set and the non-annotated data set in the first image, the determining unit is specifically configured to:
scan the first image to obtain a second image;
perform image segmentation on the second image to obtain a plurality of annotated region images and a plurality of non-annotated region images;
determine a plurality of pieces of annotation information corresponding to the plurality of annotated region images, wherein each annotated region corresponds to one piece of annotation information;
determine a plurality of pieces of non-annotation information corresponding to the plurality of non-annotated region images, wherein each non-annotated region corresponds to one piece of non-annotation information;
and classify the plurality of pieces of annotation information and the plurality of pieces of non-annotation information in a preset manner to obtain a plurality of pieces of annotated data corresponding to a plurality of first categories and a plurality of pieces of non-annotated data corresponding to a plurality of second categories, form the plurality of pieces of annotated data into the annotated data set, and form the plurality of pieces of non-annotated data into the non-annotated data set.
7. An electronic device, comprising a processor and a memory, wherein the memory stores one or more programs configured to be executed by the processor, the one or more programs comprising instructions for performing the steps in the method of any one of claims 1-4.
8. A computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-4.
CN201911269065.5A 2019-12-11 2019-12-11 Display labeling method and related product Active CN111143912B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911269065.5A CN111143912B (en) 2019-12-11 2019-12-11 Display labeling method and related product

Publications (2)

Publication Number Publication Date
CN111143912A CN111143912A (en) 2020-05-12
CN111143912B CN111143912B (en) 2023-04-07

Family

ID=70518030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911269065.5A Active CN111143912B (en) 2019-12-11 2019-12-11 Display labeling method and related product

Country Status (1)

Country Link
CN (1) CN111143912B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111831791B * 2020-06-29 2024-03-22 Shenzhen Wanyi Digital Technology Co.,Ltd. Drawing display method, electronic equipment and graphic server
CN111832093B * 2020-06-29 2024-05-28 Shenzhen Wanyi Digital Technology Co.,Ltd. Electronic drawing processing method, electronic equipment and graphic server

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1759399A * 2003-03-11 2006-04-12 Siemens Medical Solutions USA, Inc. Computer-aided detection systems and methods for ensuring manual review of computer marks in medical images
CN108764372A * 2018-06-08 2018-11-06 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Data set construction method and device, mobile terminal, and readable storage medium
CN109190209A * 2018-08-17 2019-01-11 Changsha Enwei Software Co., Ltd. Weak rigidity method and system based on BIM model
CN110378842A * 2019-07-25 2019-10-25 Xiamen University Image texture filtering method, terminal device and storage medium

Also Published As

Publication number Publication date
CN111143912A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
CN109801347B (en) Method, device, equipment and medium for generating editable image template
CN109685870B (en) Information labeling method and device, labeling equipment and storage medium
CN111143912B (en) Display labeling method and related product
CN110675940A (en) Pathological image labeling method and device, computer equipment and storage medium
CN111240669B (en) Interface generation method and device, electronic equipment and computer storage medium
CN110442519B (en) Crash file processing method and device, electronic equipment and storage medium
WO2021147219A1 (en) Image-based text recognition method and apparatus, electronic device, and storage medium
CN110751149A (en) Target object labeling method and device, computer equipment and storage medium
CN111208998A (en) Method and device for automatically laying out data visualization large screen and storage medium
CN109656652B (en) Webpage chart drawing method, device, computer equipment and storage medium
US11481577B2 (en) Machine learning (ML) quality assurance for data curation
CN107146098B (en) Advertisement operation configuration method and equipment
CN112541240A (en) Part drawing method, computer device and storage medium
CN110633251B (en) File conversion method and equipment
CN115878935B (en) Method, system, device, equipment and medium for partial refreshing of chart
CN114724170A (en) BOM generation method and device, electronic equipment and storage medium
CN113672143B (en) Image labeling method, system, device and storage medium
CN112487774B (en) Writing form electronization method and device and electronic equipment
WO2022105120A1 (en) Text detection method and apparatus from image, computer device and storage medium
EP4207745A1 (en) Method for embedding image in video, and method and apparatus for acquiring planar prediction model
CN113704650A (en) Information display method, device, system, equipment and storage medium
CN114443022A (en) Method for generating page building block and electronic equipment
CN114663418A (en) Image processing method and device, storage medium and electronic equipment
CN112732100A (en) Information processing method and device and electronic equipment
CN111582143A (en) Student classroom attendance method and device based on image recognition and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230625

Address after: A601, Zhongke Naneng Building, No. 06 Yuexing 6th Road, Gaoxin District Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518063

Patentee after: Shenzhen Wanyi Digital Technology Co.,Ltd.

Address before: 519000 room 105-24914, No.6 Baohua Road, Hengqin New District, Zhuhai City, Guangdong Province (centralized office area)

Patentee before: WANYI TECHNOLOGY Co.,Ltd.
