CN108876934B - Key point marking method, device and system and storage medium - Google Patents


Info

Publication number
CN108876934B
CN108876934B (application CN201711384795.0A)
Authority
CN
China
Prior art keywords
model
display
labeled
annotated
keypoint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711384795.0A
Other languages
Chinese (zh)
Other versions
CN108876934A (en)
Inventor
李悦
马里千
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou kuangyun Jinzhi Technology Co., Ltd
Beijing Kuangshi Technology Co Ltd
Original Assignee
Hangzhou Kuangyun Jinzhi Technology Co ltd
Beijing Kuangshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Kuangyun Jinzhi Technology Co ltd, Beijing Kuangshi Technology Co Ltd filed Critical Hangzhou Kuangyun Jinzhi Technology Co ltd
Priority to CN201711384795.0A
Publication of CN108876934A
Application granted
Publication of CN108876934B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/004: Annotating, labelling

Landscapes

  • Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention provides a key point labeling method, device, system and storage medium. The method comprises the following steps: controlling a display device to display a model to be labeled and a standard reference model, wherein identifiers of M reference key points are displayed on the standard reference model, and M is an integer greater than or equal to 1; acquiring the positions of M target key points determined by a user on the model to be labeled according to the M reference key points; and controlling the display device to display the identifiers of the M target key points on the model to be labeled based on the positions of the M target key points. According to the key point labeling method, device, system and storage medium, the reference key points of the standard reference model serve as reference objects for the labeling positions, so that the user can be conveniently guided in labeling the model to be labeled, realizing efficient, convenient and robust key point labeling.

Description

Key point marking method, device and system and storage medium
Technical Field
The invention relates to the technical field of computers, in particular to a method, a device and a system for marking key points and a storage medium.
Background
At present, three-dimensional technology is continuously developing and is widely applied in many fields. When researching and developing a three-dimensional model (such as a three-dimensional human face model) in engineering, accurate key point position information is needed. The position information of the key points often needs to be labeled manually. In the manual labeling process, because the labeling personnel lack an intuitive grasp of which key points are to be labeled, when the number of three-dimensional face models to be labeled is large, labeling is slow and mislabeling or missed labels easily occur. At present, no simple, efficient and robust application can assist labeling personnel in completing the labeling of batch data of three-dimensional models.
Disclosure of Invention
The present invention has been made in view of the above problems. The invention provides a method, a device and a system for marking key points and a storage medium.
According to an aspect of the present invention, a key point labeling method is provided. The method comprises the following steps: controlling a display device to display a model to be labeled and a standard reference model, wherein identifiers of M reference key points are displayed on the standard reference model, and M is an integer greater than or equal to 1; acquiring the positions of M target key points determined by a user on the model to be labeled according to the M reference key points; and controlling the display device to display the identifiers of the M target key points on the model to be labeled based on the positions of the M target key points.
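The three claimed steps can be sketched as a minimal session object. This is an illustrative sketch only; the class and method names are assumptions, not taken from the patent.

```python
# Hypothetical sketch of the three-step annotation flow described above.
# All class, method, and field names are illustrative assumptions.

class AnnotationSession:
    def __init__(self, reference_keypoints):
        # reference_keypoints: list of (name, position) pairs shown on the
        # standard reference model; M = len(reference_keypoints) >= 1
        self.reference_keypoints = reference_keypoints
        self.target_keypoints = []  # (name, position) marked by the user

    def display_reference_ids(self):
        # Step S210: identifiers of the M reference keypoints shown on
        # the standard reference model.
        return [name for name, _ in self.reference_keypoints]

    def record_target(self, name, position):
        # Step S220: store a target keypoint position chosen by the user
        # on the model to be labeled.
        self.target_keypoints.append((name, position))

    def display_target_ids(self):
        # Step S230: identifiers displayed on the model to be labeled.
        return [name for name, _ in self.target_keypoints]

session = AnnotationSession([("left_eye", (0.1, 0.2, 0.3))])
session.record_target("left_eye", (0.11, 0.19, 0.31))
```

The session simply mirrors the claim structure: reference identifiers in, user-chosen positions recorded, target identifiers out.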
Illustratively, before controlling the display device to display the model to be annotated and the standard reference model, the method further comprises: and determining M reference key points based on the current labeling state of the model to be labeled.
Illustratively, identifiers of other reference key points are also displayed on the standard reference model, with a point or sphere of a first color used as the identifiers of the M reference key points and a point or sphere of a second color used as the identifiers of the other reference key points.
Illustratively, the model to be annotated also displays an identifier of at least one target key point which has been annotated before, and before controlling the display device to display the model to be annotated and the standard reference model, the method further comprises: and determining at least one target key point at least based on the current labeling state of the model to be labeled.
Exemplarily, the determining of the at least one target keypoint based on at least the current annotation state of the model to be annotated comprises: and determining at least one target key point based on the current labeling state of the model to be labeled and the current display angle of the model to be labeled.
Illustratively, the method further comprises: receiving an adjusting instruction input by a user for a first model, wherein the first model is one of a model to be annotated and a standard reference model, and the adjusting instruction comprises an instruction for indicating one or more operations of rotation, translation and zooming; and in response to the adjustment instruction, performing the corresponding operation indicated by the adjustment instruction on the first model.
Illustratively, the method further comprises: and in response to the adjusting instruction, performing operation consistent with the first model on a second model, wherein the second model is one of the model to be annotated and the standard reference model, and the second model is different from the first model.
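The synchronized adjustment described above (an operation on the first model repeated on the second) can be sketched as follows; the `Model` class and its fields are assumptions for illustration, not the patent's implementation.

```python
# Illustrative sketch: applying a user adjustment (rotate/translate/zoom)
# to the first model and mirroring it on the second model so both stay
# aligned. Class and field names are assumptions.

class Model:
    def __init__(self):
        self.rotation = 0.0   # degrees about the vertical axis
        self.scale = 1.0
        self.offset = (0.0, 0.0)

def apply_adjustment(model, op, amount):
    if op == "rotate":
        model.rotation = (model.rotation + amount) % 360
    elif op == "zoom":
        model.scale *= amount
    elif op == "translate":
        dx, dy = amount
        model.offset = (model.offset[0] + dx, model.offset[1] + dy)

def adjust_synced(first, second, op, amount):
    # Perform the operation indicated by the adjustment instruction on the
    # first model, then repeat the same operation on the second model.
    apply_adjustment(first, op, amount)
    apply_adjustment(second, op, amount)

to_annotate, reference = Model(), Model()
adjust_synced(to_annotate, reference, "rotate", 30.0)
```

Keeping both models at the same pose means the reference keypoint identifiers stay spatially comparable with the region the user is labeling.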
Illustratively, the method further comprises: detecting the position of a cursor in real time; and if the cursor is positioned on the model to be marked, controlling the display device to display the set identifier at the intersection point of the cursor and the model to be marked in real time.
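Finding the intersection of the cursor with the model typically reduces to casting a ray from the cursor into the mesh. The sketch below uses the standard Möller-Trumbore ray-triangle test; the mesh representation and function names are assumptions, not the patent's.

```python
# Minimal ray-triangle picking sketch (Moller-Trumbore), showing how the
# cursor's intersection point with the mesh could be found in real time.
# Pure-Python vector math; names are illustrative assumptions.

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    # Returns the intersection point of the ray with the triangle, or None.
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    def dot(a, b): return sum(x*y for x, y in zip(a, b))

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to the triangle
    inv = 1.0 / det
    t_vec = sub(origin, v0)
    u = dot(t_vec, p) * inv
    if u < 0 or u > 1:
        return None
    q = cross(t_vec, e1)
    v = dot(direction, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, q) * inv
    if t < 0:
        return None
    return tuple(o + t*d for o, d in zip(origin, direction))

# Ray shot straight down the -z axis at a triangle in the z=0 plane.
hit = ray_triangle_intersect((0.25, 0.25, 1.0), (0.0, 0.0, -1.0),
                             (0, 0, 0), (1, 0, 0), (0, 1, 0))
```

In practice the test would run against the nearest hit over all visible triangles, and the set identifier would be drawn at the returned point.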
Illustratively, the position of each target keypoint of the M target keypoints is obtained by: receiving a marking instruction which is input by a user and aims at a model to be marked; and responding to the marking instruction, and determining that the current position of the intersection point of the cursor and the model to be marked is the position of one of the M target key points.
Illustratively, the method further comprises: receiving a mark cancellation instruction which is input by a user and aims at a model to be marked; and responding to the annotation canceling instruction, and controlling the display device to delete the identification of the target key point which is annotated last time in the M target key points on the model to be annotated.
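The cancel-annotation behaviour above (delete the identifier of the last-labeled target keypoint) maps naturally onto a stack. A minimal sketch, with assumed names:

```python
# Sketch (names are assumptions): the annotation-cancel instruction removes
# the most recently labeled target keypoint, i.e. a stack pop.

class TargetKeypointStack:
    def __init__(self):
        self._points = []  # (name, position), in labeling order

    def annotate(self, name, position):
        self._points.append((name, position))

    def cancel_last(self):
        # Delete the identifier of the last-labeled target keypoint, if any.
        return self._points.pop() if self._points else None

    def visible_ids(self):
        return [name for name, _ in self._points]

stack = TargetKeypointStack()
stack.annotate("left_eye_pupil", (0.1, 0.2, 0.3))
stack.annotate("right_eye_pupil", (0.4, 0.2, 0.3))
stack.cancel_last()
```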
Illustratively, the method further comprises: determining a next reference key point behind the M reference key points based on the current labeling state of the model to be labeled; and controlling the display device to display the identifier of the next reference key point on the standard reference model in a first mode and to display the identifiers of the other reference key points in a second mode.
Illustratively, the method further comprises: receiving a labeling skipping instruction which is input by a user and aims at a model to be labeled; and in response to the annotation skipping instruction, controlling the display device to modify the identifier of a first reference keypoint currently displayed in the first mode on the standard reference model to be displayed in the second mode, and to display the identifier of a second reference keypoint located after the first reference keypoint in the first mode on the standard reference model, the second reference keypoint being spaced from the first reference keypoint by a predetermined number of reference keypoints; or, in response to the instruction of skipping the annotation, controlling the display device to switch the currently displayed model to be annotated to another model to be annotated for display.
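The two skip behaviours described above can be sketched as small helpers; the function names, the clamping choice, and the queue layout are assumptions for illustration.

```python
# Sketch of the two skip behaviours (hypothetical names): advancing the
# highlighted reference keypoint past a predetermined number of keypoints,
# or switching the display to the next model to be labeled.

def skip_keypoints(current_index, total, skip_count):
    # Move the first-mode (highlighted) reference keypoint forward by
    # skip_count positions, clamping to the last keypoint.
    return min(current_index + skip_count, total - 1)

def skip_model(current_model, model_queue):
    # Switch the display to the next model in the batch, if any remains.
    return model_queue.pop(0) if model_queue else current_model

idx = skip_keypoints(current_index=2, total=68, skip_count=5)
nxt = skip_model("face_001", ["face_002", "face_003"])
```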
Illustratively, the method further comprises: when the labeling of all target key points of the model to be labeled at the current display angle is completed, rotating one or both of the model to be labeled and the standard reference model to the next display angle; and/or when the labeling of all target key points of the model to be labeled at the current display angle is completed, taking a screenshot of the model to be labeled at the current display angle and storing the captured image.
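The angle-completion check can be sketched as below. The angle names, set-based completeness test, and the screenshot flag are assumptions for this sketch.

```python
# Sketch, with assumed names: when every target keypoint at the current
# display angle has been labeled, advance to the next display angle and
# signal that a screenshot of the current view should be captured.

def on_angle_complete(labeled, required, angles, current):
    # labeled / required: sets of keypoint names for the current angle.
    if set(required) - set(labeled):
        return current, False            # annotation not finished yet
    i = angles.index(current)
    nxt = angles[(i + 1) % len(angles)]  # rotate to the next display angle
    return nxt, True                     # True: capture a screenshot now

angle, capture = on_angle_complete(
    labeled={"left_eye", "right_eye"},
    required={"left_eye", "right_eye"},
    angles=["front", "left_profile", "right_profile"],
    current="front",
)
```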
Illustratively, the method further comprises: and when the preset condition appears in the label of the model to be labeled, outputting corresponding user prompt information based on the preset condition.
Illustratively, the predetermined conditions include: receiving a labeling skipping instruction aiming at the model to be labeled, and determining to switch the currently displayed model to be labeled into the next model to be labeled based on the labeling skipping instruction; the user prompt information comprises information for prompting to skip the currently displayed model to be marked; and/or the predetermined conditions include: when the received adjustment instruction for the model to be annotated indicates that the model to be annotated is rotated to the next display angle, and the annotation of the target key point of the model to be annotated at the current display angle is not completed; the user prompt message comprises a message for prompting that the annotation is not completed.
Illustratively, controlling the display device to display the model to be annotated and the standard reference model comprises: and controlling the display device to display the model to be annotated and the standard reference model in the first display window and the second display window respectively.
According to another aspect of the present invention, there is provided a key point labeling apparatus comprising modules for performing the corresponding steps of the above key point labeling method. Illustratively, the key point labeling apparatus includes: a first display control module for controlling the display device to display the model to be labeled and the standard reference model, wherein identifiers of M reference key points are displayed on the standard reference model, and M is an integer greater than or equal to 1; a position acquisition module for acquiring the positions of M target key points determined by a user on the model to be labeled according to the M reference key points; and a second display control module for controlling the display device to display the identifiers of the M target key points on the model to be labeled based on the positions of the M target key points.
According to another aspect of the present invention, there is provided a keypoint tagging system comprising a processor and a memory, wherein the memory has stored therein computer program instructions for executing the above keypoint tagging method when executed by the processor.
According to another aspect of the present invention, there is provided a storage medium having stored thereon program instructions for performing the above-described method of keypoint annotation when executed.
According to the key point labeling method, device and system and the storage medium, the reference key points of the standard reference model are used as the reference objects of the labeling positions, and a user can be guided to label the model to be labeled very conveniently, so that efficient, convenient and robust key point labeling is realized, and the problems of wrong labeling and label missing are reduced.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 illustrates a schematic block diagram of an example electronic device for implementing a keypoint annotation method and apparatus in accordance with embodiments of the invention;
FIG. 2 shows a schematic flow diagram of a keypoint labeling method according to one embodiment of the invention;
FIGS. 3a-3d show schematic diagrams of a standard reference model and the identifiers of reference keypoints thereon according to an embodiment of the invention;
FIG. 4 illustrates a flow diagram of a method of keypoint annotation, according to one embodiment of the invention;
FIG. 5 shows a schematic block diagram of a keypoint tagging apparatus according to one embodiment of the invention; and
FIG. 6 shows a schematic block diagram of a keypoint tagging system according to one embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of embodiments of the invention and not all embodiments of the invention, with the understanding that the invention is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention described herein without inventive step, shall fall within the scope of protection of the invention.
In order to solve the above problem, embodiments of the present invention provide a method, an apparatus, and a system for labeling a keypoint, and a storage medium. The key point labeling method provided by the embodiment of the invention can assist the labeling personnel (namely users) to realize efficient and robust model labeling work. The method for marking the key points can be applied to various fields needing to mark the model, such as the fields of human face marking and recognition.
First, an example electronic device 100 for implementing the keypoint annotation method and apparatus according to an embodiment of the invention is described with reference to fig. 1.
As shown in fig. 1, electronic device 100 includes one or more processors 102, one or more memory devices 104. Optionally, the electronic device 100 may also include an input device 106, an output device 108, and a model information acquisition device 110, which are interconnected via a bus system 112 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 102 may be a Central Processing Unit (CPU), a Graphics Processor (GPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the client-side functionality (implemented by the processor) and/or other desired functionality in the embodiments of the invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images and/or sounds) to an outside (e.g., a user), and the output device 108 may include a display. Optionally, the output device may also include a speaker or the like. Alternatively, the input device 106 and the output device 108 may be integrated together, implemented using the same interactive device (e.g., a touch screen).
The model information acquiring means 110 may acquire various information related to the model (including model information of the model to be annotated, model information of the standard reference model, keypoint information of the standard reference model, and the like), and store the acquired information in the storage means 104 for use by other components. Alternatively, the model information acquiring means 110 may be an image capturing means such as a camera, or the like. The model information of the model to be annotated may include an image acquired by the image acquisition device, and the model to be annotated may be generated based on the image acquired by the image acquisition device.
Exemplary electronic devices for implementing the method and apparatus for keypoint annotation according to embodiments of the present invention may be implemented on devices such as personal computers or remote servers, for example.
Hereinafter, a key point labeling method according to an embodiment of the present invention will be described with reference to fig. 2. FIG. 2 shows a schematic flow diagram of a keypoint labeling method 200, according to one embodiment of the invention. As shown in fig. 2, the keypoint labeling method 200 includes the following steps S210, S220, and S230.
In step S210, the display device is controlled to display the model to be labeled and a standard reference model, where identifiers of M reference key points are displayed on the standard reference model, and M is an integer greater than or equal to 1.
For example, the display device may be various displays such as a liquid crystal display, an organic light emitting display, a Cathode Ray Tube (CRT) display, and the like.
The model information of the model to be labeled, the model information of the standard reference model and the key point information of the standard reference model can be obtained firstly. Subsequently, a model to be labeled may be generated based on model information of the model to be labeled, a standard reference model may be generated based on model information of the standard reference model, and reference keypoints may be generated on the standard reference model based on keypoint information of the standard reference model. The model information of the standard reference model and the key point information of the standard reference model are optional, and the standard reference model may be generated based on preset default information and the position of each reference key point may be determined.
For example, modeling may be performed according to model information of a model to be labeled, and an initial model skeleton (unrendered model) of the model to be labeled is generated. Illustratively, the model information of the model to be annotated may include basic information of the model to be annotated, such as size, number of vertices, vertex coordinates, and the like of the model to be annotated. As described above, the model information of the model to be annotated may include images acquired by the image acquisition device, such as a plurality of face images including target faces with different poses (i.e. different face orientations). And performing three-dimensional modeling based on the plurality of face images to obtain a three-dimensional face model (namely the model to be labeled). The three-dimensional face models built for different target faces may be different. Illustratively, the model information of the model to be labeled may further include illumination information, material information, and other information related to the rendering operation of the model to be labeled. Under the condition that the model to be labeled is a face model, the material information of the model to be labeled can be obtained based on the initially obtained face image, and face skin information and the like can be extracted from the face image to serve as the material information. Subsequently, the initial model skeleton of the model to be annotated can be rendered, including rendering of material and illumination. And the rendering result is the model to be labeled. The model to be annotated may then be output for display on a display device after rendering.
Similarly, the model information of the standard reference model may include basic information of the standard reference model, such as information of the size, the number of vertices, the coordinates of vertices, and the like of the standard reference model. The size of the standard reference model can be set according to needs, for example, the size of the standard reference model can be set to be not much different from the size of the model to be labeled. Modeling can be carried out according to the model information of the standard reference model, and an initial model skeleton of the standard reference model is generated. For example, the model information of the standard reference model may further include information related to the rendering operation, such as illumination information of the standard reference model. Subsequently, an initial model skeleton of the standard reference model may be rendered, including a rendering of lighting. The result of the rendering is a standard reference model. The standard reference model may then be output after rendering for display on a display device. Illustratively, the model information of the standard reference model may further include material information. The material can be set for the standard reference model, and rendering is carried out according to the preset material. The standard reference model may not be rendered or rendered with system default textures.
The keypoint information of the standard reference model comprises position information of the reference keypoints of the standard reference model. When the standard reference model is rendered, the positions of the reference key points can be rendered in a preset mode at the same time, so that when the standard reference model is displayed on the display device, the identification of each reference key point is displayed at the position of the reference key point. The identification may include, but is not limited to, a point or sphere of a particular color and/or size, for example, marked with a red dot at the location of the reference keypoint, i.e., the identification of the reference keypoint. For example, the identification of reference keypoints may also be points of other shapes or solid structures.
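Turning the keypoint information into renderable marker identifiers could look like the sketch below. The dictionary layout, default red color, and sphere shape are illustrative assumptions (the patent permits points or spheres of any particular color and/or size).

```python
# Illustrative sketch: converting the standard reference model's keypoint
# information (name -> 3D position) into marker draw descriptions.
# Field names and defaults are assumptions for this sketch.

def build_keypoint_markers(keypoint_info, color=(1.0, 0.0, 0.0), radius=0.01):
    # One marker per reference keypoint, drawn at the keypoint's position
    # so the identifier appears there when the model is displayed.
    return [
        {"name": name, "position": pos, "color": color,
         "radius": radius, "shape": "sphere"}
        for name, pos in keypoint_info.items()
    ]

markers = build_keypoint_markers({"nose_tip": (0.0, 0.0, 0.5)})
```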
Exemplarily, step S210 may include: and controlling the display device to display the model to be annotated and the standard reference model in the first display window and the second display window respectively. Illustratively, the first display window and the second display window are capable of independent operation.
In one example, two regions, for example, the first display window and the second display window, may be divided on the same display interface to display the model to be annotated and the standard reference model, respectively. The first display window and the second display window can be respectively and independently zoomed in, zoomed out, fullscreened, minimized, panned and the like. Optionally, one of the first display window and the second display window may follow the other to perform respective operations of zooming in, zooming out, fullscreening, minimizing, panning, and the like. In another example, the model to be annotated and the standard reference model may be displayed on two display devices, respectively. And synchronously displaying the model to be labeled and the standard reference model so as to facilitate a user to label the model to be labeled by referring to the key point information shown on the standard reference model.
In the case where the model to be annotated and the standard reference model are face models, the keypoints described herein may include face keypoints, for example, the face keypoints may include keypoints of face contours, eyes, eyebrows, lips, nose contours, and chin, etc. In the case where the model to be annotated and the standard reference model are head models, the key points described herein may include face key points and ear key points.
The standard reference model has at least one reference keypoint, the position of which is preset. And the user marks the target key points at the corresponding positions of the model to be marked according to the reference key points. The M reference keypoints described herein may be part or all of the reference keypoints of a standard reference model. For example, the M reference key points may be the reference key points that the user needs to label correspondingly next.
FIG. 3a shows a schematic diagram of a standard reference model and identification of reference keypoints thereon, according to one embodiment of the invention. As shown in fig. 3a, on the standard reference model, several key points (i.e. reference key points) are marked with points having a specific color (color not shown in the figure). The user may look at the identifier of the reference key point on the standard reference model shown in fig. 3a, determine the position of the target key point on the model to be labeled according to the identifier of the reference key point, and label the corresponding target key point on the model to be labeled.
In one example, the standard reference model has only one display angle (or viewing angle), or is capable of displaying the identifications of all the reference keypoints of the standard reference model at the current display angle, in which case the display device may be controlled to display the identifications of all the reference keypoints at the current display angle of the standard reference model. In another example, the standard reference model has a plurality of display angles and only the identification of a part of the reference keypoints can be displayed at each display angle, in which case the display means may be controlled to display the identification of all the reference keypoints at the current display angle on the standard reference model. In yet another example, only the identification of the reference keypoint corresponding to the next keypoint to be annotated of the model to be annotated may be displayed at a time. The identity of the reference keypoints displayed on the standard reference model may be updated each time a user marks a target keypoint.
In step S220, the positions of M target key points determined by the user on the model to be labeled according to the M reference key points are obtained.
The user can refer to the indication of the standard reference model to find a suitable labeling position on the model to be labeled, and interact with the electronic device 100 through an interaction device such as a mouse, keyboard or touch screen, for example, by clicking a certain position (e.g., the left-eye pupil position) of the model to be labeled on the display screen with the mouse. The electronic device 100 receives the user's click operation and determines that the clicked position is the position of a target key point. In addition, based on the information of the corresponding reference key point at the time of labeling, or based on the coordinates of the mouse click position, it can be determined that the user has labeled the left-eye pupil key point. The name of the labeled target key point (e.g., "left eye pupil") and its coordinates may then be recorded. By repeating the labeling operation at different positions of the model to be labeled, the position information of each target key point of the model to be labeled can be obtained.
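The recording step above (clicked position becomes the coordinates, name taken from the reference keypoint currently being labeled) can be sketched as follows; the data structures and names are assumptions.

```python
# Sketch of the click-handling step (assumed names): the clicked position
# becomes the target keypoint's coordinates, and its name is taken from
# the reference keypoint currently awaiting annotation.

def handle_click(click_position, pending_reference_names, records):
    if not pending_reference_names:
        return None                        # nothing left to annotate
    name = pending_reference_names.pop(0)  # e.g. "left_eye_pupil"
    records[name] = click_position         # record name and coordinates
    return name

records = {}
pending = ["left_eye_pupil", "right_eye_pupil"]
handle_click((0.12, 0.30, 0.45), pending, records)
```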
In step S230, based on the positions of the M target key points, the display device is controlled to display the identifications of the M target key points on the model to be labeled.
Based on the location information of the M target keypoints, the identities of the M target keypoints may be displayed at corresponding locations for viewing by a user. Similar to the identification of the reference keypoints, the identification of the target keypoints may include, but is not limited to, a point or sphere having a particular color.
According to the key point labeling method provided by the embodiment of the invention, the reference key points of the standard reference model are used as the reference object of the labeling position, so that a user can be very conveniently guided to label the model to be labeled, the efficient, convenient and robust key point labeling is realized, and the problems of wrong labeling and label missing are reduced.
Illustratively, the keypoint labeling method according to the embodiments of the present invention can be implemented in a device, apparatus or system having a memory and a processor.
The keypoint labeling method according to the embodiment of the invention can be deployed at personal terminals such as smart phones, tablet computers, personal computers and the like.
Alternatively, the method for labeling the key points according to the embodiment of the present invention may also be distributively deployed at the server side and the client side. For example, information related to the model to be annotated and/or the standard reference model may be acquired at the client (for example, a face image is acquired at the image acquisition end), and the client transmits the acquired information to the server (or the cloud end), so that the server (or the cloud end) performs the key point annotation.
According to an embodiment of the present invention, before step S210, the keypoint annotation method 200 may include: determining the M reference keypoints based on the current annotation state of the model to be annotated.
The current annotation state, i.e., the current annotation progress, indicates which target keypoints have been annotated so far.
Exemplarily, step S210 may include: the display device is controlled to display the identities of the M reference keypoints and the identities of the other reference keypoints in a distinguishable manner on the standard reference model.
In one example, only the identifiers of the reference keypoints (e.g., the M reference keypoints) corresponding to the keypoints to be annotated next on the model to be annotated may be displayed on the standard reference model, while the identifiers of the other reference keypoints are not displayed. In this case, the reference keypoints corresponding to the keypoints to be annotated next can be rendered in any manner, since they need not be distinguished from other displayed identifiers. With this approach, the display of the reference keypoint identifiers needs to be continuously updated as the annotation progresses. The amount of data involved in calculation and rendering is small, so data processing is fast.
In another example, the standard reference model also has an identification of other reference keypoints displayed thereon. For example, the M reference keypoints and the other reference keypoints displayed on the standard reference model may constitute all reference keypoints of the standard reference model at the current display angle. For example, the identities of the M reference keypoints may be displayed in a first mode and the identities of the other reference keypoints may be displayed in a second mode, the first mode being different from the second mode, such that the M reference keypoints may be distinguished from the other reference keypoints. For example, a point or sphere of a first color is used as the identification of the M reference keypoints, and a point or sphere of a second color is used as the identification of the other reference keypoints. Optionally, the size of the identifiers of the M reference keypoints is different from the size of the identifiers of the other reference keypoints, for example, the size of the identifiers of the M reference keypoints is larger than the size of the other reference keypoints, so that the difference between the two may be further increased. For example, the reference keypoints corresponding to the next keypoint to be annotated and the other reference keypoints may be shown in different colors and sizes, respectively, on the standard reference model.
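The first-mode/second-mode distinction in this example can be sketched as a per-keypoint styling function. This is an illustrative sketch; the concrete colors, radii, and the `identifier_style` name are assumptions, matching the patent's example of a larger first-color sphere for the keypoints to be annotated next and a smaller second-color sphere for the rest:

```python
def identifier_style(index, next_to_label):
    # First mode: a larger point/sphere in a first color for the reference
    # keypoint corresponding to the next keypoint to be annotated.
    # Second mode: a smaller point/sphere in a second color for the others.
    if index == next_to_label:
        return {"color": "green", "radius": 2.0}  # first mode
    return {"color": "red", "radius": 1.0}        # second mode

# Style every reference keypoint for the current render pass.
styles = [identifier_style(i, next_to_label=1) for i in range(5)]
```

Because both color and size differ, the keypoint to be annotated next stands out doubly from the other reference keypoints.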
For example, as shown in fig. 3a, in an initial stage, reference keypoints at the center of the eyebrow may be represented by points of a first color (e.g., green), and other reference keypoints may be represented by points of a second color (e.g., red). In fig. 3a, the dots of the first color are represented by dots of relatively large area and relatively dark color.
As shown in fig. 3a, in the initial stage, when the user has not yet performed any annotation, the identifier of the reference keypoint at the eyebrow center is displayed on the standard reference model in a color different from the identifiers of the other reference keypoints, indicating that the user should begin annotating from the eyebrow-center keypoint. Then, when the user completes the annotation of the eyebrow-center keypoint on the model to be annotated through an interaction device such as a mouse, a keyboard, or a touch screen, the processor determines from the signals received by the interaction device that the annotation operation is finished. The next time the standard reference model is rendered, the processor may render the eyebrow-center keypoint in the second color (e.g., red) and render the second reference keypoint in the first color (e.g., green). The second reference keypoint is, for example, the leftmost point on the left eyebrow, which can then be rendered green. The next time the display device refreshes, the new rendering result is displayed. Seeing the new display content, the user can continue by annotating the leftmost point on the left eyebrow on the model to be annotated as indicated. For example, the display order of the reference keypoints on the standard reference model, i.e., the annotation order, may be preset.
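The state advance described in this paragraph, highlighting the next reference keypoint once the current one is confirmed, can be sketched as follows. The class and method names are illustrative assumptions, not the patent's API:

```python
class HighlightState:
    # Tracks which reference keypoint is shown in the first mode (green);
    # advances when the user confirms an annotation on the model to label.
    def __init__(self, n_keypoints):
        self.n = n_keypoints
        self.next_index = 0  # start from the eyebrow-center keypoint

    def on_annotation_confirmed(self):
        # The next render pass will highlight the following reference point.
        if self.next_index < self.n - 1:
            self.next_index += 1

    def color_of(self, index):
        # First color for the keypoint to annotate next, second otherwise.
        return "green" if index == self.next_index else "red"

state = HighlightState(n_keypoints=3)
state.on_annotation_confirmed()  # eyebrow center labeled; advance highlight
```

On each render pass, `color_of` decides the mode per keypoint, so the display naturally catches up with the annotation state at the next refresh.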
Figs. 3b-3d show schematic diagrams of the standard reference model shown in Fig. 3a, and the identifiers of the reference keypoints thereon, at three other display angles, according to embodiments of the invention. As shown in Figs. 3b-3d, the initial reference keypoint, represented by the point of the first color, changes when the standard reference model is at different display angles.
Since the identifier of the next reference keypoint is continuously updated according to the user's current annotation state, the keypoint to be annotated next can be indicated clearly and promptly. The user always knows which target keypoint to annotate next, the effort of checking the current and subsequent annotation status is saved, and annotation efficiency and accuracy are greatly improved.
According to the embodiment of the present invention, the model to be annotated in steps S210 and S230 may further display the identifier of at least one target keypoint that has been annotated before, and before step S210 the keypoint annotation method 200 may further include: determining the at least one target keypoint based on at least the current annotation state of the model to be annotated. In other words, the model to be annotated may display not only the M target keypoints annotated in the current time period but also at least one target keypoint annotated before it, that is, some or all of the target keypoints annotated so far.
In one example, the model to be labeled has only one display angle, or the identifiers of all labeled target key points can be displayed at the current display angle, in which case all labeled target key points can be displayed on the model to be labeled.
In another example, the model to be annotated has multiple display angles and only the identification of a part of the annotated target keypoints can be displayed at each display angle, in which case all the annotated keypoints at the current display angle can be displayed on the model to be annotated. In this example, determining at least one target keypoint based on at least the current annotation state of the model to be annotated may comprise: and determining at least one target key point based on the current labeling state of the model to be labeled and the current display angle of the model to be labeled.
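Determining which annotated keypoints to show at the current display angle can be sketched with a simple facing test. This is an assumption for illustration only: a real implementation would use the model's actual geometry and occlusion, whereas here each keypoint carries a surface normal and is kept when that normal faces the camera:

```python
def visible_labeled_keypoints(labeled, view_dir):
    # Keep only annotated keypoints whose surface normal opposes the
    # viewing direction at the current display angle (a back-face test).
    visible = []
    for name, normal in labeled:
        facing = sum(n * v for n, v in zip(normal, view_dir))
        if facing < 0:  # normal points toward the camera -> visible
            visible.append(name)
    return visible

labeled = [("left_eye_pupil", (0.0, 0.0, -1.0)),   # faces the camera
           ("back_of_head",   (0.0, 0.0,  1.0))]   # faces away
front_view = (0.0, 0.0, 1.0)  # camera looks along +z
```

With this filter, rotating the model to a new display angle changes the set of identifiers drawn, matching the per-angle display behavior described above.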
Similar to the identification of the reference keypoints, the identification of the labeled target keypoints displayed on the model to be labeled may include, but is not limited to, a point or a sphere having a specific color and/or a specific size, for example, a red point is marked at the position of the target keypoint, and the red point is the identification of the target keypoint.
The processor 102 may render the model to be annotated and the standard reference model with a predetermined frequency. The predetermined frequency may be, for example, 20-30 frames per second. Each time the model to be annotated is rendered, the current annotation state of the model to be annotated (how many target keypoints have been annotated) may be determined first, and at least part of the annotated target keypoints may optionally be determined in combination with the current display angle. Each time the model to be annotated is rendered, at least part of the annotated target keypoints are rendered in a predetermined pattern (e.g. a specific color) so that the display screen can display an identification (e.g. a point or a sphere with a specific color) at the location where at least part of the annotated target keypoints are located.
The user is presented with the identification of at least one target key point, so that the user can conveniently confirm the progress of the labeling work and check the previous labeling work.
According to the embodiment of the present invention, the method 200 for labeling a keypoint may further include: receiving an adjusting instruction input by a user for a first model, wherein the first model is one of a model to be annotated and a standard reference model, and the adjusting instruction comprises an instruction for indicating one or more operations of rotation, translation and zooming; and in response to the adjustment instruction, performing the corresponding operation indicated by the adjustment instruction on the first model.
Illustratively, the keypoint tagging method 200 may further include: it is detected whether an adjustment instruction for the first model is received. Detecting whether an adjustment instruction for the first model is received may include: monitoring one or more of a mouse event, a keyboard event, a touch screen event, and a gesture event; determining whether the adjustment instruction is received based on the monitored event.
For example, the user's operations on one or more of a mouse, a keyboard, and a touch screen may be monitored in real time. When the user adjusts the model to be annotated through one of these devices, for example, by dragging the mouse to rotate the model by a certain angle, the corresponding rotation operation is performed on the model to be annotated.
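Mapping monitored low-level events to adjustment instructions can be sketched as a small dispatcher. The event fields and the rotate/zoom factors are illustrative assumptions:

```python
def interpret_event(event):
    # Map monitored input events to high-level adjustment instructions
    # (rotate / translate / zoom); return None for unrelated events.
    if event["type"] == "mouse_drag":
        # Drag distance -> rotation angle (degrees); factor is arbitrary.
        return ("rotate", event["dx"] * 0.5, event["dy"] * 0.5)
    if event["type"] == "mouse_wheel":
        return ("zoom", 1.1 if event["delta"] > 0 else 0.9)
    if event["type"] == "key" and event["key"] in ("left", "right", "up", "down"):
        return ("translate", event["key"])
    return None  # not an adjustment instruction
```

The monitoring loop feeds each mouse, keyboard, touch-screen, or gesture event through `interpret_event`; a non-`None` result means an adjustment instruction was received.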
According to the embodiment of the present invention, the method 200 for labeling a keypoint may further include: and in response to the adjusting instruction, performing operation consistent with the first model on a second model, wherein the second model is one of the model to be annotated and the standard reference model, and the second model is different from the first model.
For example, when the user rotates the model to be annotated by a certain angle by dragging the mouse, the processor 102 may automatically adjust the standard reference model so that it is rotated by the same angle. When the user manually adjusts one of the model to be annotated and the standard reference model, the processor can automatically adjust the other so that it performs the same operation; the standard reference model and the model to be annotated thus always keep matched size, angle, and so on. No manual adjustment by the user is needed: each model actively follows the other, the two models remain highly consistent, the user can conveniently inspect and annotate, and the user experience is good.
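Mirroring an adjustment from the first model onto the second can be sketched as follows; the `Model` class and its fields are illustrative, not the patent's data structures:

```python
class Model:
    def __init__(self, name):
        self.name = name
        self.rotation = [0.0, 0.0]  # yaw, pitch in degrees
        self.scale = 1.0

    def rotate(self, d_yaw, d_pitch):
        self.rotation[0] += d_yaw
        self.rotation[1] += d_pitch

def apply_adjustment(first, second, d_yaw, d_pitch):
    # Whatever rotation the user applies to one model is mirrored on the
    # other, so both views always stay at a matched angle.
    first.rotate(d_yaw, d_pitch)
    second.rotate(d_yaw, d_pitch)

to_label = Model("model_to_be_annotated")
reference = Model("standard_reference_model")
apply_adjustment(to_label, reference, 30.0, 0.0)  # user drags the mouse
```

The same pattern applies to translation and zoom: one handler applies the operation to the user-adjusted model and replays it on the other.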
According to the embodiment of the present invention, the method 200 for labeling a keypoint may further include: detecting the position of a cursor in real time; and if the cursor is positioned on the model to be marked, controlling the display device to display the set identifier at the intersection point of the cursor and the model to be marked in real time.
In one example, when a user operates a mouse to move a cursor on a display screen, a small ball with a specific color at an intersection point of the cursor position and a model to be marked can be rendered in real time, so that the user can conveniently see the position of the current mouse. In the process of moving the mouse by the user, the intersection point of the cursor position and the model to be labeled can be regarded as a virtual key point, and the ball with the specific color can be regarded as the identifier (i.e. the set identifier) of the virtual key point. Preferably, the color of the virtual keypoints is different from the color of the labeled keypoints. The user can see that a small ball on the display screen moves with the mouse. When the cursor moves to the position of the key point to be marked, the user can click the left mouse button, click a confirmation button (such as an enter button) on the keyboard, or click a confirmation control on the touch screen, at this time, the small ball can be placed at the position of the key point, and the color of the small ball can be rendered into the color of the marked key point, so that the marking of the key point is completed.
In another example, the intersection of the cursor position and the model to be annotated is not rendered in real time as the mouse moves. Instead, when the user clicks the left mouse button, presses a confirmation button on the keyboard (e.g., the enter key), or taps a confirmation control on the touch screen, the intersection of the cursor position and the model to be annotated is rendered in the color of annotated keypoints, completing the annotation of the target keypoint at the intersection position. While the intersection of the cursor position and the model to be annotated is rendered, all previously annotated keypoints are rendered as well.
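Computing the intersection of the cursor with the model is a picking problem: a ray is cast from the camera through the cursor and intersected with the geometry. As a self-contained stand-in, the sketch below intersects the ray with a sphere; the patent's implementation would intersect with the actual model mesh, so this is an assumption for illustration:

```python
import math

def ray_sphere_intersection(origin, direction, center, radius):
    # Stand-in for cursor picking: intersect a camera ray with a sphere.
    # Returns the nearest intersection point, or None if the cursor is
    # not over the model.
    ox, oy, oz = (o - c for o, c in zip(origin, center))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearest hit along the ray
    return tuple(o + t * d for o, d in zip(origin, direction))
```

The returned point is where the set identifier (the small colored ball) would be drawn, and it becomes the target keypoint's position when a confirmation is received.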
According to the embodiment of the invention, the position of each target key point in the M target key points is obtained by the following method: receiving a marking instruction which is input by a user and aims at a model to be marked; and responding to the marking instruction, and determining that the current position of the intersection point of the cursor and the model to be marked is the position of one of the M target key points.
Illustratively, the keypoint tagging method 200 may further include: and detecting whether a labeling instruction for the model to be labeled is received. Detecting whether the annotation instruction for the model to be annotated is received can include: monitoring one or more of a mouse event, a keyboard event, a touch screen event, and a gesture event; and determining whether the annotation instruction is received or not according to the monitored event.
As described above, when the user clicks the left mouse button, or clicks a confirmation button (e.g., enter) on the keyboard, or clicks a confirmation control on the touch screen, it may be determined that the annotation instruction was received. At this time, an intersection point of the cursor and the model to be labeled may be regarded as one of the target key points, and a current position of the intersection point may be regarded as a position of the target key point. The intersection point of the cursor and the model to be labeled can be rendered as the color of the labeled key point.
According to the embodiment of the present invention, the method 200 for labeling a keypoint may further include: receiving a mark cancellation instruction which is input by a user and aims at a model to be marked; and responding to the annotation canceling instruction, and controlling the display device to delete the identification of the target key point which is annotated last time in the M target key points on the model to be annotated.
Illustratively, the keypoint tagging method 200 may further include: and detecting whether a label canceling instruction for the model to be labeled is received. The detecting whether a label canceling instruction for the model to be labeled is received can include: monitoring one or more of a mouse event, a keyboard event, a touch screen event, and a gesture event; and determining whether the annotation canceling instruction is received or not according to the monitored event.
For example, when the user clicks a right mouse button, or clicks a cancel button (e.g., backspace or delete button) on a keyboard, or clicks a cancel control on a touch screen, it may be determined that a annotation cancellation instruction is received. At this point, the last annotation result may be undone. The keypoint label information associated with the most recently labeled target keypoint may be deleted from memory. In addition, the processor may further render the most recently labeled target keypoint again at the next rendering, such that the display device no longer displays the identification of the most recently labeled target keypoint.
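The undo behavior above can be sketched as removing the last entry of the annotation record; the `LabelStore` name is illustrative:

```python
class LabelStore:
    def __init__(self):
        self.labeled = []  # (name, position) tuples in annotation order

    def add(self, name, position):
        self.labeled.append((name, position))

    def undo(self):
        # On a cancel instruction (right click, backspace/delete key, or a
        # cancel control), delete the most recently annotated keypoint; on
        # the next render pass its identifier is no longer drawn.
        return self.labeled.pop() if self.labeled else None

store = LabelStore()
store.add("eyebrow_center", (0.5, 0.8, 0.1))
store.add("left_eyebrow_tip", (0.3, 0.8, 0.1))
undone = store.undo()
```

Since each render pass re-derives what to draw from `labeled`, deleting the record is sufficient; no explicit erase operation on the display is needed.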
According to the embodiment of the present invention, the method 200 for labeling a keypoint may further include: determining a next reference key point behind the M reference key points based on the current labeling state of the model to be labeled; and controlling the display device to display the identifier of the next reference key point on the standard reference model in a first mode and to display the identifiers of the other reference key points in a second mode.
Either or both of the first mode and the second mode may include displaying with a dot or a sphere having a particular color. The second mode is different from one or more of the parameters such as color, size and the like of the mark corresponding to the first mode, so that the marks displayed by adopting the first mode and the second mode can be distinguished. Alternatively, the second mode may be to not display any logo.
The embodiment that the next reference key point and other reference key points are displayed in different colors has been described above with reference to fig. 3a to 3d, and those skilled in the art can understand the embodiment with reference to the above description, and will not be described here again.
According to the embodiment of the present invention, the method 200 for labeling a keypoint may further include: receiving a labeling skipping instruction which is input by a user and aims at a model to be labeled; and in response to the annotation skipping instruction, controlling the display device to modify the identifier of a first reference keypoint currently displayed in the first mode on the standard reference model to be displayed in the second mode, and to display the identifier of a second reference keypoint located after the first reference keypoint in the first mode on the standard reference model, the second reference keypoint being spaced from the first reference keypoint by a predetermined number of reference keypoints; or, in response to the instruction of skipping the annotation, controlling the display device to switch the currently displayed model to be annotated to another model to be annotated for display.
Illustratively, the keypoint tagging method 200 may further include: and detecting whether an annotation skipping instruction for the model to be annotated is received. Detecting whether an annotation skipping instruction for the model to be annotated is received may include: monitoring one or more of a mouse event, a keyboard event, a touch screen event, and a gesture event; and determining whether the annotation skipping instruction is received or not according to the monitored event.
Assume that the identifier of the reference keypoint currently corresponding to the next keypoint to be annotated (i.e. the first reference keypoint) is displayed in a first color and the identifiers of the other reference keypoints are displayed in a second color. For example, when the user double clicks the left mouse button, or clicks the space bar on the keyboard, or clicks the skip control on the touch screen, it may be determined that an annotation skip instruction is received. At this time, the labeling of a plurality of target key points can be skipped, and the labeling of the target key points behind the target key points can be started directly. For example, on the standard reference model, reference keypoints corresponding to target keypoints (subsequent to-be-labeled keypoints) after skipping a predetermined number of target keypoints may be rendered as points of a first color, and other reference keypoints may be rendered as points of a second color.
In one example, when an annotation skipping instruction is received, if no remaining key points to be annotated exist after a predetermined number of target key points are skipped, the model to be annotated can be automatically switched from the current model to the next model, and the annotation of the next model is restarted. In another example, when the annotation skipping instruction is received, the model to be annotated is directly switched from the current model to the next model, and the annotation on the next model is restarted.
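The two skip behaviors, advancing past a predetermined number of keypoints or switching to the next model when none remain, can be sketched together; the function and return tags are illustrative assumptions:

```python
def apply_skip(next_index, total, skip=1):
    # Skip a predetermined number of keypoints. If no keypoints remain to
    # be annotated on this model, switch to the next model and restart
    # from its first keypoint (one of the two behaviors described above).
    new_index = next_index + skip
    if new_index >= total:
        return ("switch_model", 0)
    return ("highlight", new_index)
```

A `("highlight", i)` result re-renders reference keypoint `i` in the first color; a `("switch_model", 0)` result loads the next model to be annotated.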
According to the embodiment of the present invention, the method 200 for labeling a keypoint may further include: when the labeling of all target key points of the model to be labeled at the current display angle is completed, rotating one or both of the model to be labeled and the standard model to the next display angle; and/or when the labeling of all target key points of the model to be labeled at the current display angle is completed, screenshot is carried out on the model to be labeled at the current display angle, and the intercepted image is stored.
In one example, any one of the model to be annotated and the standard model may be automatically rotated. For example, after the user has completed the annotation at the display angle as shown in FIG. 3a, the standard reference model may be automatically rotated to the display angle as shown in FIG. 3 b. The standard reference model may be sequentially rotated to be displayed in the order of the display angles shown in fig. 3a-3d for the user to label one by one. The model to be annotated can be automatically rotated to the next display angle as the standard reference model is rotated. Of course, after the standard reference model is automatically rotated, the user may manually rotate the model to be annotated with reference to the rotation of the standard reference model.
In another example, any one of the model to be annotated and the standard model may be manually rotated by a user. As described above, the other of the model to be annotated and the standard model may rotate following the rotation of the former.
Optionally, after the annotation of the target key point at the current display angle is completed, the current annotation result may be saved in the screenshot for subsequent preview.
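The completion handling at each display angle, screenshot then rotate, can be sketched as a small action list; the angle names and ordering are illustrative, standing in for the display angles of Figs. 3a-3d:

```python
DISPLAY_ANGLES = ["front", "left", "right", "back"]  # illustrative order

def on_angle_complete(angle_index, take_screenshot=True):
    # When all target keypoints at the current display angle are labeled:
    # optionally capture the current view for later preview, then rotate
    # both models to the next display angle in the preset sequence.
    actions = []
    if take_screenshot:
        actions.append(("screenshot", DISPLAY_ANGLES[angle_index]))
    next_index = (angle_index + 1) % len(DISPLAY_ANGLES)
    actions.append(("rotate_to", DISPLAY_ANGLES[next_index]))
    return actions
```

The returned actions are applied to both the model to be annotated and the standard reference model, keeping the two views synchronized as annotation proceeds angle by angle.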
According to the embodiment of the present invention, the method 200 for labeling a keypoint may further include: and when the preset condition appears in the label of the model to be labeled, outputting corresponding user prompt information based on the preset condition.
The user prompt may include textual information and/or voice information.
In one example, the predetermined condition may include: receiving a labeling skipping instruction aiming at the model to be labeled, and determining to switch the currently displayed model to be labeled into the next model to be labeled based on the labeling skipping instruction; the user prompt information may include information for prompting to skip the currently displayed model to be annotated.
When the user inputs an annotation skipping instruction, if it is determined that the model needs to be switched, user prompt information can be output to remind the user that the current operation will switch models. The user can then confirm whether to actually switch. In this way, erroneous operations can be avoided.
In another example, the predetermined condition may include: rotating the model to be annotated to the next display angle under the received instruction of adjusting the model to be annotated, wherein the annotation of the target key point of the model to be annotated under the current display angle is not completed; the user prompt message may include a message for prompting that the annotation is incomplete.
If the display angle is to be converted when the annotation at the current display angle is not completed, the user can be prompted that the annotation is not completed. In this way, the user can be prevented from missing a label.
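Mapping the predetermined conditions above to user prompt information can be sketched as a lookup; the condition keys and wording are illustrative, and only text prompts are shown (the embodiments also allow voice prompts):

```python
def prompt_for(condition):
    # Map a predetermined condition detected during annotation to the
    # corresponding user prompt information.
    prompts = {
        "model_switch": "You are about to skip the current model. Confirm?",
        "annotation_incomplete": "Annotation at this display angle is not "
                                 "complete. Rotate anyway?",
    }
    return prompts.get(condition)
```

Returning `None` for an unrecognized condition means no prompt is shown, so normal annotation operations proceed without interruption.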
According to the embodiment of the present invention, the method 200 for labeling a keypoint may further include: rendering and outputting the model to be annotated and the standard reference model by adopting a preset frequency to be displayed by a display device; when the model to be marked is rendered each time, determining the current position of the marked key point to be displayed and/or the intersection point of the cursor and the model to be marked on the model to be marked, rendering and outputting the mark of the marked key point to be displayed and/or the preset mark corresponding to the current position of the intersection point to be displayed so as to be displayed by a display device; and determining a reference keypoint to be displayed on the standard reference model each time the standard reference model is rendered, rendering and outputting an identification of the reference keypoint to be displayed for display by the display device.
As described above, the processor 102 may render the model to be annotated and the standard reference model with a predetermined frequency. When the model to be annotated and the standard reference model are rendered each time, the key points to be displayed on the model to be annotated and the standard reference model can be determined according to the current annotation state and the preset requirement, and corresponding rendering is executed, so that the display device can display the identification of the key points to be displayed. The present embodiment can be understood in conjunction with the above description about rendering of the model to be annotated and the standard reference model, which is not described herein again.
Fig. 4 is a flowchart illustrating a method for labeling a keypoint according to an embodiment of the present invention. As shown in fig. 4, model information of the model to be annotated may be obtained first, and optionally, model information and key point information of the standard reference model may be obtained. Subsequently, the model to be annotated and the standard reference model can be zoomed and moved to the visual angle center of each window, and information such as illumination and texture is added to respectively perform model rendering. Then, on the standard reference model, the initial reference keypoints may be rendered in a first color and the other reference keypoints may be rendered in a second color. It should be understood that the initial reference keypoint refers to the first reference keypoint of the reference keypoints arranged in a preset display order. Subsequently, the user's real-time mouse and keyboard operations may be detected, and the intersection of the cursor position and the model to be annotated (see description above) may be rendered in real-time. Subsequently, when a marking instruction of a user is received, the intersection point position is determined to be the position of the target key point, and the position information of the target key point is recorded. And finally, after all the target key points are labeled, the position information and other related information of each target key point can be output.
The method for labeling the key points can be applied to the key point labeling of various three-dimensional models, but the method is not limited to the three-dimensional models and can also be applied to the key point labeling of two-dimensional models (such as human face images).
According to another aspect of the present invention, a key point labeling apparatus is provided. FIG. 5 shows a schematic block diagram of a keypoint tagging apparatus 500 according to one embodiment of the invention.
As shown in fig. 5, the keypoint labeling apparatus 500 according to the embodiment of the invention includes a first display control module 510, a position acquisition module 520, and a second display control module 530. Optionally, the apparatus 500 may further comprise a display device. The respective modules may respectively perform the respective steps/functions of the keypoint labeling method described above in connection with fig. 2. Only the main functions of the components of the key point labeling apparatus 500 will be described below, and the details that have been described above will be omitted.
The first display control module 510 is configured to control the display device to display a model to be labeled and a standard reference model, where M identifiers of reference key points are displayed on the standard reference model, and M is an integer greater than or equal to 1. The first display control module 510 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
The position obtaining module 520 is configured to obtain positions of M target key points determined by a user on a model to be labeled according to the M reference key points. The location acquisition module 520 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
The second display control module 530 is configured to control the display device to display the identifiers of the M target key points on the model to be labeled based on the positions of the M target key points. The second display control module 530 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
Illustratively, the apparatus 500 further comprises: a first key point determining module, configured to determine M reference key points based on the current labeling status of the model to be labeled before the first display control module 510 controls the display apparatus to display the model to be labeled and the standard reference model.
Illustratively, the standard reference model also displays the identifications of other reference key points, and adopts a point or a sphere of a first color as the identifications of the M reference key points and adopts a point or a sphere of a second color as the identifications of the other reference key points.
Illustratively, the model to be labeled also displays an identifier of at least one target key point which has been labeled before, and the apparatus 500 further includes: and a second key point determining module, configured to determine at least one target key point based on at least the current labeling state of the model to be labeled before the first display control module 510 controls the display apparatus to display the model to be labeled and the standard reference model.
Illustratively, the second keypoint determination module is specifically configured to determine at least one target keypoint based on the current annotation state of the model to be annotated and the current display angle of the model to be annotated.
Illustratively, the apparatus 500 further comprises: the system comprises a receiving module, a judging module and a display module, wherein the receiving module is used for receiving an adjusting instruction which is input by a user and aims at a first model, the first model is one of a model to be annotated and a standard reference model, and the adjusting instruction comprises an instruction used for indicating one or more operations of rotation, translation and scaling; and the operation execution module is used for responding to the adjusting instruction and executing the corresponding operation indicated by the adjusting instruction on the first model.
The operation execution module is further used for responding to the adjusting instruction and executing the operation which is consistent with the first model on a second model, wherein the second model is one of the model to be labeled and the standard reference model, and the second model is different from the first model.
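The synchronized adjustment of the two models can be sketched as follows. This is an illustrative sketch under stated assumptions, not the patented implementation: `ModelView`, `rotation_z`, and `apply_adjustment` are hypothetical names, and a model's pose is represented as a 4×4 homogeneous transform.

```python
import numpy as np

def rotation_z(angle_rad):
    """4x4 homogeneous rotation about the z-axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

class ModelView:
    """Minimal stand-in for a displayed 3D model's pose."""
    def __init__(self):
        self.transform = np.eye(4)  # model-to-view transform

    def apply(self, matrix):
        self.transform = matrix @ self.transform

def apply_adjustment(first_model, second_model, matrix):
    """Apply the user's adjustment to the first model and mirror the
    identical operation onto the second model, so the model to be
    annotated and the standard reference model stay aligned."""
    first_model.apply(matrix)
    second_model.apply(matrix)
```

Applying the same matrix to both poses is what keeps the reference keypoints and the corresponding region of the model to be annotated visible from the same viewpoint.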
Illustratively, the apparatus 500 further comprises: the cursor detection module is used for detecting the position of a cursor in real time; and the third display control module is used for controlling the display device to display the set identifier at the intersection point of the cursor and the model to be marked in real time if the cursor is positioned on the model to be marked.
Illustratively, the location obtaining module 520 is specifically configured to: receiving a marking instruction which is input by a user and aims at a model to be marked; and responding to the marking instruction, and determining that the current position of the intersection point of the cursor and the model to be marked is the position of one of the M target key points.
Illustratively, the apparatus 500 further comprises: the cancellation instruction receiving module is used for receiving a marking cancellation instruction which is input by a user and aims at the model to be marked; and the fourth display control module is used for responding to the annotation canceling instruction and controlling the display device to delete the identification of the target key point which is annotated last time in the M target key points on the model to be annotated.
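Deleting the identifier of the most recently annotated target keypoint is naturally modeled as popping a stack. The following is a minimal sketch of that bookkeeping; `AnnotationSession` and its method names are hypothetical, not from the patent.

```python
class AnnotationSession:
    """Tracks target keypoints marked so far; supports undo of the
    most recent annotation (the cancel-annotation instruction)."""
    def __init__(self):
        self._marked = []  # positions in annotation order

    def mark(self, position):
        self._marked.append(position)

    def undo_last(self):
        """Remove and return the most recently annotated target
        keypoint, or None if nothing has been annotated yet."""
        if self._marked:
            return self._marked.pop()
        return None

    @property
    def marked(self):
        return list(self._marked)
```

The display layer would remove the corresponding identifier from the model to be annotated whenever `undo_last` returns a position.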
Illustratively, the apparatus 500 further comprises: the third key point determining module is used for determining a next reference key point behind the M reference key points based on the current labeling state of the model to be labeled; and a fifth display control module for controlling the display device to display the identifier of the next reference keypoint on the standard reference model in the first mode and to display the identifiers of the other reference keypoints in the second mode.
Illustratively, the apparatus 500 further comprises: a skip instruction receiving module and a sixth display control module. The skipping instruction receiving module is used for receiving a marking skipping instruction which is input by a user and aims at the model to be marked. The sixth display control module is configured to: in response to the annotation skipping instruction, controlling the display device to modify the identifier of a first reference key point currently displayed in the first mode on the standard reference model to be displayed in the second mode, and displaying the identifier of a second reference key point located behind the first reference key point on the standard reference model in the first mode, wherein the second reference key point and the first reference key point are spaced by a predetermined number of reference key points; or, in response to the instruction of skipping the annotation, controlling the display device to switch the currently displayed model to be annotated to another model to be annotated for display.
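The skip behaviour above — advance the highlighted reference keypoint by a predetermined count, or move on to the next model when the skip runs past the last keypoint — can be sketched as an index computation. This is an illustrative sketch; the function name and the None-means-switch-model convention are assumptions.

```python
def skip_reference_keypoint(current_index, skip_count, total):
    """Return the index of the reference keypoint to highlight (display
    in the first mode) after a skip instruction, or None if the skip
    runs past the last keypoint, in which case the caller would switch
    to the next model to be annotated instead."""
    next_index = current_index + skip_count
    return next_index if next_index < total else None
```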
Illustratively, the apparatus 500 further comprises a rotation module and/or a screenshot module. The rotation module is configured to rotate one or both of the model to be annotated and the standard reference model to the next display angle when the annotation of all the target key points of the model to be annotated at the current display angle is completed. The screenshot module is configured to capture a screenshot of the model to be annotated at the current display angle and store the captured image when the annotation of all the target key points of the model to be annotated at the current display angle is completed.
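The rotation and screenshot modules' trigger logic can be sketched as a small state transition: once every target keypoint expected at the current display angle is marked, a screenshot of that angle should be saved and the view advanced to the next angle. The function and return shape below are hypothetical names for illustration.

```python
def on_annotation_progress(marked_count, expected_count, angle_index, num_angles):
    """Decide what to do after a keypoint is marked: when every target
    keypoint at the current display angle is done, signal a screenshot
    of the current angle and advance to the next angle (if any remain)."""
    if marked_count < expected_count:
        return {"screenshot": False, "angle_index": angle_index}
    next_index = angle_index + 1 if angle_index + 1 < num_angles else angle_index
    return {"screenshot": True, "angle_index": next_index}
```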
Illustratively, the apparatus 500 further comprises: and the prompting module is used for outputting corresponding user prompting information based on a predetermined condition when the predetermined condition occurs in the label of the model to be labeled.
Illustratively, the predetermined conditions include: a labeling skipping instruction for the model to be labeled is received, and it is determined based on the labeling skipping instruction to switch the currently displayed model to be labeled to the next model to be labeled; in this case the user prompt information comprises information for prompting that the currently displayed model to be labeled is skipped. And/or, the predetermined conditions include: the received adjustment instruction for the model to be annotated indicates that the model to be annotated is to be rotated to the next display angle while the annotation of the target key points of the model to be annotated at the current display angle is not completed; in this case the user prompt information comprises information for prompting that the annotation is not completed.
Illustratively, the first display control module 510 is specifically configured to control the display device to display the model to be annotated and the standard reference model in the first display window and the second display window, respectively.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
FIG. 6 illustrates a schematic block diagram of a keypoint tagging system 600 in accordance with one embodiment of the present invention. The keypoint tagging system 600 includes a display device 610, a storage device 620, and a processor 630.
The display device 610 is configured to display the model to be labeled, the standard reference model, and the identifier of each key point.
The storage means 620 stores computer program instructions for implementing the respective steps in the keypoint labeling method according to an embodiment of the invention.
The processor 630 is configured to execute the computer program instructions stored in the storage device 620 to perform the corresponding steps of the keypoint labeling method according to the embodiment of the invention.
In one embodiment, the computer program instructions, when executed by the processor 630, are used for performing the steps of: controlling the display device to display a model to be annotated and a standard reference model, wherein identifiers of M reference key points are displayed on the standard reference model, and M is an integer greater than or equal to 1; acquiring the positions of M target key points determined by a user on the model to be annotated according to the M reference key points; and controlling the display device to display the identifiers of the M target key points on the model to be annotated based on the positions of the M target key points.
Illustratively, the computer program instructions, when executed by the processor 630, are further operable to perform the following step before the step of controlling the display device to display the model to be annotated and the standard reference model: determining the M reference key points based on the current labeling state of the model to be labeled.
Illustratively, the standard reference model also displays the identifications of other reference key points, and adopts a point or a sphere of a first color as the identifications of the M reference key points and adopts a point or a sphere of a second color as the identifications of the other reference key points.
Illustratively, the model to be annotated also has displayed thereon an identifier of at least one target key point that has been annotated before. The computer program instructions, when executed by the processor 630, are further operable to perform the following step before the step of controlling the display device to display the model to be annotated and the standard reference model: determining the at least one target key point based on at least the current labeling state of the model to be labeled.
Illustratively, the step of determining the at least one target key point based on at least the current annotation state of the model to be annotated, performed when the computer program instructions are executed by the processor 630, comprises: determining the at least one target key point based on the current labeling state of the model to be labeled and the current display angle of the model to be labeled.
Illustratively, the computer program instructions when executed by the processor 630 are further operable to perform the steps of: receiving an adjusting instruction input by a user for a first model, wherein the first model is one of a model to be annotated and a standard reference model, and the adjusting instruction comprises an instruction for indicating one or more operations of rotation, translation and zooming; and in response to the adjustment instruction, performing the corresponding operation indicated by the adjustment instruction on the first model.
Illustratively, the computer program instructions when executed by the processor 630 are further operable to perform the steps of: and in response to the adjusting instruction, performing operation consistent with the first model on a second model, wherein the second model is one of the model to be annotated and the standard reference model, and the second model is different from the first model.
Illustratively, the computer program instructions when executed by the processor 630 are further operable to perform the steps of: detecting the position of a cursor in real time; and if the cursor is positioned on the model to be marked, controlling the display device to display the set identifier at the intersection point of the cursor and the model to be marked in real time.
Illustratively, the position of each target keypoint of the M target keypoints is obtained by: receiving a marking instruction which is input by a user and aims at a model to be marked; and responding to the marking instruction, and determining that the current position of the intersection point of the cursor and the model to be marked is the position of one of the M target key points.
Illustratively, the computer program instructions when executed by the processor 630 are further operable to perform the steps of: receiving a mark cancellation instruction which is input by a user and aims at a model to be marked; and responding to the annotation canceling instruction, and controlling the display device to delete the identification of the target key point which is annotated last time in the M target key points on the model to be annotated.
Illustratively, the computer program instructions when executed by the processor 630 are further operable to perform the steps of: determining a next reference key point behind the M reference key points based on the current labeling state of the model to be labeled; and controlling the display device to display the identifier of the next reference key point on the standard reference model in a first mode and to display the identifiers of the other reference key points in a second mode.
Illustratively, the computer program instructions when executed by the processor 630 are further operable to perform the steps of: receiving a labeling skipping instruction which is input by a user and aims at a model to be labeled; and in response to the annotation skipping instruction, controlling the display device to modify the identifier of a first reference keypoint currently displayed in the first mode on the standard reference model to be displayed in the second mode, and to display the identifier of a second reference keypoint located after the first reference keypoint in the first mode on the standard reference model, the second reference keypoint being spaced from the first reference keypoint by a predetermined number of reference keypoints; or, in response to the instruction of skipping the annotation, controlling the display device to switch the currently displayed model to be annotated to another model to be annotated for display.
Illustratively, the computer program instructions when executed by the processor 630 are further operable to perform the steps of: when the annotation of all the target key points of the model to be annotated at the current display angle is completed, rotating one or both of the model to be annotated and the standard reference model to the next display angle; and/or, when the annotation of all the target key points of the model to be annotated at the current display angle is completed, capturing a screenshot of the model to be annotated at the current display angle and storing the captured image.
Illustratively, the computer program instructions when executed by the processor 630 are further operable to perform the steps of: and when the preset condition appears in the label of the model to be labeled, outputting corresponding user prompt information based on the preset condition.
Illustratively, the predetermined conditions include: a labeling skipping instruction for the model to be labeled is received, and it is determined based on the labeling skipping instruction to switch the currently displayed model to be labeled to the next model to be labeled; in this case the user prompt information comprises information for prompting that the currently displayed model to be labeled is skipped. And/or, the predetermined conditions include: the received adjustment instruction for the model to be annotated indicates that the model to be annotated is to be rotated to the next display angle while the annotation of the target key points of the model to be annotated at the current display angle is not completed; in this case the user prompt information comprises information for prompting that the annotation is not completed.
Illustratively, the step of controlling the display device to display the model to be annotated and the standard reference model, performed when the computer program instructions are executed by the processor 630, comprises: controlling the display device to display the model to be annotated and the standard reference model in a first display window and a second display window, respectively.
Furthermore, according to an embodiment of the present invention, there is also provided a storage medium on which program instructions are stored, which when executed by a computer or a processor are used for executing the corresponding steps of the keypoint labeling method according to the embodiment of the present invention and for implementing the corresponding modules in the keypoint labeling apparatus according to the embodiment of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, or any combination of the above storage media.
In one embodiment, the program instructions, when executed by a computer or a processor, may cause the computer or the processor to implement the functional modules of the keypoint labeling apparatus according to the embodiment of the present invention, and/or may execute the keypoint labeling method according to the embodiment of the present invention.
In one embodiment, the program instructions are operable when executed to perform the steps of: controlling a display device to display a model to be annotated and a standard reference model, wherein identifiers of M reference key points are displayed on the standard reference model, and M is an integer greater than or equal to 1; acquiring the positions of M target key points determined by a user on the model to be annotated according to the M reference key points; and controlling the display device to display the identifiers of the M target key points on the model to be annotated based on the positions of the M target key points.
Illustratively, before the step of controlling the display means to display the model to be annotated and the standard reference model, the program instructions are further operable to perform the steps of: and determining M reference key points based on the current labeling state of the model to be labeled.
Illustratively, the standard reference model also displays the identifications of other reference key points, and adopts a point or a sphere of a first color as the identifications of the M reference key points and adopts a point or a sphere of a second color as the identifications of the other reference key points.
Illustratively, the model to be annotated also has displayed thereon an identifier of at least one target key point that has been annotated before. The program instructions, when executed, are further operable to perform the following step before the step of controlling the display device to display the model to be annotated and the standard reference model: determining the at least one target key point based on at least the current labeling state of the model to be labeled.
Illustratively, the step of determining the at least one target key point based on at least the current annotation state of the model to be annotated, performed when the program instructions are run, comprises: determining the at least one target key point based on the current labeling state of the model to be labeled and the current display angle of the model to be labeled.
Illustratively, the program instructions are further operable when executed to perform the steps of: receiving an adjusting instruction input by a user for a first model, wherein the first model is one of a model to be annotated and a standard reference model, and the adjusting instruction comprises an instruction for indicating one or more operations of rotation, translation and zooming; and in response to the adjustment instruction, performing the corresponding operation indicated by the adjustment instruction on the first model.
Illustratively, the program instructions are further operable when executed to perform the steps of: and in response to the adjusting instruction, performing operation consistent with the first model on a second model, wherein the second model is one of the model to be annotated and the standard reference model, and the second model is different from the first model.
Illustratively, the program instructions are further operable when executed to perform the steps of: detecting the position of a cursor in real time; and if the cursor is positioned on the model to be marked, controlling the display device to display the set identifier at the intersection point of the cursor and the model to be marked in real time.
Illustratively, the position of each target keypoint of the M target keypoints is obtained by: receiving a marking instruction which is input by a user and aims at a model to be marked; and responding to the marking instruction, and determining that the current position of the intersection point of the cursor and the model to be marked is the position of one of the M target key points.
Illustratively, the program instructions are further operable when executed to perform the steps of: receiving a mark cancellation instruction which is input by a user and aims at a model to be marked; and responding to the annotation canceling instruction, and controlling the display device to delete the identification of the target key point which is annotated last time in the M target key points on the model to be annotated.
Illustratively, the program instructions are further operable when executed to perform the steps of: determining a next reference key point behind the M reference key points based on the current labeling state of the model to be labeled; and controlling the display device to display the identifier of the next reference key point on the standard reference model in a first mode and to display the identifiers of the other reference key points in a second mode.
Illustratively, the program instructions are further operable when executed to perform the steps of: receiving a labeling skipping instruction which is input by a user and aims at a model to be labeled; and in response to the annotation skipping instruction, controlling the display device to modify the identifier of a first reference keypoint currently displayed in the first mode on the standard reference model to be displayed in the second mode, and to display the identifier of a second reference keypoint located after the first reference keypoint in the first mode on the standard reference model, the second reference keypoint being spaced from the first reference keypoint by a predetermined number of reference keypoints; or, in response to the instruction of skipping the annotation, controlling the display device to switch the currently displayed model to be annotated to another model to be annotated for display.
Illustratively, the program instructions are further operable when executed to perform the steps of: when the annotation of all the target key points of the model to be annotated at the current display angle is completed, rotating one or both of the model to be annotated and the standard reference model to the next display angle; and/or, when the annotation of all the target key points of the model to be annotated at the current display angle is completed, capturing a screenshot of the model to be annotated at the current display angle and storing the captured image.
Illustratively, the program instructions are further operable when executed to perform the steps of: and when the preset condition appears in the label of the model to be labeled, outputting corresponding user prompt information based on the preset condition.
Illustratively, the predetermined conditions include: a labeling skipping instruction for the model to be labeled is received, and it is determined based on the labeling skipping instruction to switch the model to be labeled from the current model to the next model; in this case the user prompt information comprises information for prompting that the current model is skipped. And/or, the predetermined conditions include: the adjustment instruction for the model to be labeled indicates that the model to be labeled is to be rotated to the next display angle while labeling of the target key points at the current display angle, among the at least one target key point, is not completed; in this case the user prompt information comprises information for prompting that the annotation is not completed.
Illustratively, the step of controlling the display device to display the model to be annotated and the standard reference model, performed when the program instructions are run, comprises: controlling the display device to display the model to be annotated and the standard reference model in a first display window and a second display window, respectively.
The modules in the keypoint labeling system according to the embodiment of the present invention may be implemented by a processor of an electronic device implementing keypoint labeling according to the embodiment of the present invention running computer program instructions stored in a memory, or may be implemented when computer instructions stored in a computer-readable storage medium of a computer program product according to the embodiment of the present invention are run by a computer.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some of the blocks in a keypoint tagging apparatus according to an embodiment of the invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
The above description is only for the specific embodiment of the present invention or the description thereof, and the protection scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (17)

1. A method of keypoint annotation comprising:
controlling a display device to display a model to be marked and a standard reference model, wherein identifiers of M reference key points are displayed on the standard reference model, and M is an integer greater than or equal to 1;
acquiring the positions of M target key points determined by a user on the model to be marked according to the M reference key points; and
controlling the display device to display the identifications of the M target key points on the model to be labeled based on the positions of the M target key points;
wherein the method further comprises:
determining a next reference key point after the M reference key points based on the current labeling state of the model to be labeled; and
controlling the display device to display the identifier of the next reference key point on the standard reference model in a first mode and to display the identifiers of other reference key points in a second mode;
wherein the method further comprises:
receiving a labeling skipping instruction which is input by the user and aims at the model to be labeled; and
in response to the annotation skipping instruction, controlling the display device to modify an identifier of a first reference keypoint currently displayed in the first mode on the standard reference model to be displayed in the second mode, and to display an identifier of a second reference keypoint located after the first reference keypoint in the first mode on the standard reference model, the second reference keypoint being spaced from the first reference keypoint by a predetermined number of reference keypoints; alternatively,
and responding to the mark skipping instruction, and controlling the display device to switch the currently displayed model to be marked into another model to be marked for display.
2. The method of claim 1, wherein, before the controlling the display device to display the model to be annotated and the standard reference model, the method further comprises:
determining the M reference keypoints based on a current annotation state of the model to be annotated.
3. The method of claim 1, wherein identifiers of other reference keypoints are further displayed on the standard reference model, wherein a point or sphere of a first color serves as the identifier of each of the M reference keypoints, and a point or sphere of a second color serves as the identifier of each of the other reference keypoints.
4. The method according to any one of claims 1 to 3, wherein an identifier of at least one previously annotated target keypoint is further displayed on the model to be annotated,
wherein, before the controlling the display device to display the model to be annotated and the standard reference model, the method further comprises:
determining the at least one target keypoint based at least on a current annotation state of the model to be annotated.
5. The method of claim 4, wherein the determining the at least one target keypoint based at least on the current annotation state of the model to be annotated comprises:
determining the at least one target keypoint based on the current annotation state of the model to be annotated and a current display angle of the model to be annotated.
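Claim 5 conditions the display of previously annotated keypoints on the current display angle. One plausible reading, sketched below with hypothetical names, is a front-facing test: an annotated keypoint is displayed only when its surface normal points toward the camera at the current display angle. This criterion is an assumption for illustration, not taken from the patent.

```python
# Hypothetical sketch of the display-angle filter in claim 5; the names and
# the front-facing criterion are assumptions, not taken from the patent.

def visible_annotated_keypoints(annotated, normals, view_dir):
    """Indices of annotated keypoints whose surface normals face the camera.

    annotated: indices already annotated (the current annotation state)
    normals:   unit surface normal for each keypoint on the model
    view_dir:  unit vector from the model toward the camera (display angle)
    """
    visible = []
    for i in sorted(annotated):
        n, v = normals[i], view_dir
        # Front-facing when the normal has a positive component toward the camera.
        if n[0] * v[0] + n[1] * v[1] + n[2] * v[2] > 0.0:
            visible.append(i)
    return visible

normals = [(0, 0, 1), (0, 0, -1), (1, 0, 0)]
# Viewing from +Z, only the keypoint whose normal is +Z is shown.
print(visible_annotated_keypoints({0, 1, 2}, normals, (0, 0, 1)))
```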
6. The method of any of claims 1 to 3, wherein the method further comprises:
receiving an adjustment instruction, input by the user, for a first model, wherein the first model is one of the model to be annotated and the standard reference model, and the adjustment instruction indicates one or more of rotation, translation, and zooming operations; and
in response to the adjustment instruction, performing on the first model the corresponding operation indicated by the adjustment instruction.
7. The method of claim 6, wherein the method further comprises:
in response to the adjustment instruction, performing on a second model an operation consistent with that performed on the first model, wherein the second model is the one of the model to be annotated and the standard reference model that is different from the first model.
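The synchronized adjustment of claims 6 and 7 amounts to applying the one operation carried by the instruction to the poses of both models, so the model being annotated and the standard reference model stay aligned. A minimal sketch (names and pose representation are illustrative assumptions):

```python
# Sketch of the synchronized adjustment in claims 6-7; the pose dictionary
# and operation names are illustrative assumptions, not the patent's design.

def apply_adjustment(models, op, amount):
    """Apply one rotation/translation/zoom operation to every model's pose."""
    for pose in models:
        if op == "rotate":
            pose["angle"] += amount            # degrees about the view axis
        elif op == "translate":
            dx, dy = amount
            pose["offset"] = (pose["offset"][0] + dx, pose["offset"][1] + dy)
        elif op == "zoom":
            pose["scale"] *= amount
        else:
            raise ValueError(f"unknown operation: {op}")

to_annotate = {"angle": 0.0, "offset": (0.0, 0.0), "scale": 1.0}
reference = {"angle": 0.0, "offset": (0.0, 0.0), "scale": 1.0}

# One instruction drives both models, keeping their display angles in step.
apply_adjustment([to_annotate, reference], "rotate", 30.0)
apply_adjustment([to_annotate, reference], "zoom", 2.0)
print(to_annotate, reference)
```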
8. The method of claim 1, wherein the method further comprises:
detecting a position of a cursor in real time; and
if the cursor is located on the model to be annotated, controlling the display device to display, in real time, a set identifier at the intersection point of the cursor and the model to be annotated.
9. The method of claim 1 or 8, wherein the position of each of the M target keypoints is obtained by:
receiving an annotation instruction, input by the user, for the model to be annotated; and
in response to the annotation instruction, determining the current position of the intersection point of the cursor and the model to be annotated as the position of one of the M target keypoints.
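Claims 8 and 9 take the cursor-model intersection point as the candidate target position. In a 3D viewer this is commonly computed by unprojecting the cursor into a ray and intersecting it with the mesh triangles; the Möller-Trumbore test below is one standard way to do this, shown as an illustrative sketch rather than the patent's actual implementation.

```python
# Standard ray-triangle intersection (Moller-Trumbore), sketching how the
# cursor-model intersection point of claims 8-9 could be found. Illustrative.

def ray_triangle_hit(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the ray/triangle intersection point, or None if there is none."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:                 # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:           # outside the triangle (first barycentric)
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:       # outside the triangle (second barycentric)
        return None
    t = f * dot(e2, q)
    if t <= eps:                     # intersection behind the cursor ray
        return None
    return tuple(origin[i] + t * direction[i] for i in range(3))

# Cursor ray shot straight down the -Z axis at a triangle in the XY plane.
hit = ray_triangle_hit((0.25, 0.25, 1.0), (0.0, 0.0, -1.0),
                       (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(hit)
```

In practice the ray would be tested against all (or spatially indexed) triangles of the model, keeping the nearest hit as the annotated position.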
10. The method of any of claims 1 to 3, wherein the method further comprises:
receiving an annotation canceling instruction, input by the user, for the model to be annotated; and
in response to the annotation canceling instruction, controlling the display device to delete, on the model to be annotated, the identifier of the most recently annotated one of the M target keypoints.
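The undo behavior of claim 10 is naturally a stack: annotations are kept in order, and a cancel instruction removes the most recent one. A minimal sketch (names are illustrative):

```python
# Sketch of the annotation-canceling behavior of claim 10: keep annotations
# in order and pop the most recent one on undo. Names are illustrative.

annotations = []  # (keypoint_name, position), in annotation order

def annotate(name, position):
    annotations.append((name, position))

def undo_last():
    # Delete the identifier of the most recently annotated target keypoint.
    return annotations.pop() if annotations else None

annotate("nose", (0.0, 0.1, 0.9))
annotate("chin", (0.0, -0.5, 0.8))
removed = undo_last()
print(removed, annotations)
```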
11. The method of any of claims 1 to 3, wherein the method further comprises:
when annotation of all target keypoints of the model to be annotated at a current display angle is completed, rotating one or both of the model to be annotated and the standard reference model to a next display angle; and/or
when annotation of all target keypoints of the model to be annotated at the current display angle is completed, capturing a screenshot of the model to be annotated at the current display angle and storing the captured image.
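The per-angle completion logic of claim 11 can be sketched as follows: the view advances to the next display angle only once every target keypoint expected at the current angle has been annotated (this is also the point where a screenshot would be captured). The angle schedule and all names are hypothetical.

```python
# Sketch of the completion check in claim 11; the angle schedule and names
# are hypothetical assumptions, not taken from the patent.

ANGLE_SCHEDULE = [0, 45, 90, 135, 180]  # example display angles in degrees

def next_display_angle(current_angle, annotated, expected):
    """Advance to the next angle only when annotation at this angle is done.

    annotated: names of keypoints annotated so far at this angle
    expected:  names of all target keypoints required at this angle
    """
    if not expected.issubset(annotated):
        return current_angle  # annotation incomplete: stay at this angle
    # Completed: rotate to the next display angle (a screenshot of the
    # current angle would be captured and stored at this point).
    i = ANGLE_SCHEDULE.index(current_angle)
    return ANGLE_SCHEDULE[min(i + 1, len(ANGLE_SCHEDULE) - 1)]

print(next_display_angle(45, {"eye_l", "eye_r"}, {"eye_l", "eye_r"}))
print(next_display_angle(45, {"eye_l"}, {"eye_l", "eye_r"}))
```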
12. The method of any of claims 1 to 3, wherein the method further comprises:
when a predetermined condition occurs during annotation of the model to be annotated, outputting corresponding user prompt information based on the predetermined condition.
13. The method of claim 12, wherein:
the predetermined condition comprises: receiving an annotation skipping instruction for the model to be annotated, and determining, based on the annotation skipping instruction, to switch the currently displayed model to be annotated to a next model to be annotated; and the user prompt information comprises information prompting that the currently displayed model to be annotated is skipped; and/or
the predetermined condition comprises: rotating the model to be annotated to a next display angle in response to a received adjustment instruction while annotation of target keypoints of the model to be annotated at a current display angle is not completed; and the user prompt information comprises information prompting that the annotation is not completed.
14. The method of any one of claims 1 to 3, wherein the controlling the display device to display the model to be annotated and the standard reference model comprises:
controlling the display device to display the model to be annotated and the standard reference model in a first display window and a second display window, respectively.
15. A keypoint annotation device, comprising:
a first display control module, configured to control a display device to display a model to be annotated and a standard reference model, wherein identifiers of M reference keypoints are displayed on the standard reference model, and M is an integer greater than or equal to 1;
a position acquisition module, configured to acquire positions of M target keypoints determined by a user on the model to be annotated according to the M reference keypoints; and
a second display control module, configured to control the display device to display identifiers of the M target keypoints on the model to be annotated based on the positions of the M target keypoints;
wherein the keypoint annotation device further comprises:
a third keypoint determining module, configured to determine, based on a current annotation state of the model to be annotated, a next reference keypoint after the M reference keypoints; and
a fifth display control module, configured to control the display device to display an identifier of the next reference keypoint on the standard reference model in a first mode and to display identifiers of the other reference keypoints in a second mode;
wherein the keypoint annotation device further comprises:
a skipping instruction receiving module, configured to receive an annotation skipping instruction, input by the user, for the model to be annotated; and
a sixth display control module, configured to, in response to the annotation skipping instruction, control the display device to change an identifier of a first reference keypoint, currently displayed in the first mode on the standard reference model, to be displayed in the second mode, and to display, in the first mode on the standard reference model, an identifier of a second reference keypoint located after the first reference keypoint, the second reference keypoint being spaced from the first reference keypoint by a predetermined number of reference keypoints; or, in response to the annotation skipping instruction, to control the display device to switch the currently displayed model to be annotated to another model to be annotated for display.
16. A keypoint annotation system comprising a display device, a processor, and a memory, wherein the memory stores computer program instructions which, when executed by the processor, perform the keypoint annotation method according to any one of claims 1 to 14.
17. A storage medium having stored thereon program instructions which, when executed, perform the keypoint annotation method according to any one of claims 1 to 14.
CN201711384795.0A 2017-12-20 2017-12-20 Key point marking method, device and system and storage medium Active CN108876934B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711384795.0A CN108876934B (en) 2017-12-20 2017-12-20 Key point marking method, device and system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711384795.0A CN108876934B (en) 2017-12-20 2017-12-20 Key point marking method, device and system and storage medium

Publications (2)

Publication Number Publication Date
CN108876934A CN108876934A (en) 2018-11-23
CN108876934B true CN108876934B (en) 2022-01-28

Family

ID=64325701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711384795.0A Active CN108876934B (en) 2017-12-20 2017-12-20 Key point marking method, device and system and storage medium

Country Status (1)

Country Link
CN (1) CN108876934B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109840592B (en) * 2018-12-24 2019-10-18 梦多科技有限公司 A kind of method of Fast Labeling training data in machine learning
CN109976614B (en) * 2019-03-28 2021-04-06 广州视源电子科技股份有限公司 Method, device, equipment and medium for marking three-dimensional graph
CN110210526A (en) * 2019-05-14 2019-09-06 广州虎牙信息科技有限公司 Predict method, apparatus, equipment and the storage medium of the key point of measurand
CN110110695B (en) * 2019-05-17 2021-03-19 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN111178266B (en) * 2019-12-30 2023-09-01 北京华捷艾米科技有限公司 Method and device for generating key points of human face
CN111310667B (en) * 2020-02-18 2023-09-01 北京小马慧行科技有限公司 Method, device, storage medium and processor for determining whether annotation is accurate
CN111626233B (en) * 2020-05-29 2021-07-13 江苏云从曦和人工智能有限公司 Key point marking method, system, machine readable medium and equipment
CN111695628B (en) * 2020-06-11 2023-05-05 北京百度网讯科技有限公司 Key point labeling method and device, electronic equipment and storage medium
CN111738180B (en) * 2020-06-28 2023-03-24 浙江大华技术股份有限公司 Key point marking method and device, storage medium and electronic device
CN114757250A (en) * 2020-12-29 2022-07-15 华为云计算技术有限公司 Image processing method and related equipment
CN112836302B (en) * 2021-03-04 2022-11-15 江南造船(集团)有限责任公司 Three-dimensional labeling method and system for ship wood cabin model based on 3DEXP platform
CN113010069A (en) * 2021-03-12 2021-06-22 浙江大华技术股份有限公司 Switching method and device for picture labels, electronic device and storage medium
CN112990032B (en) * 2021-03-23 2022-08-16 中国人民解放军海军航空大学航空作战勤务学院 Face image processing method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390282A (en) * 2013-07-30 2013-11-13 百度在线网络技术(北京)有限公司 Image tagging method and device
EP2750100A2 (en) * 2012-12-28 2014-07-02 Samsung Electronics Co., Ltd Image transformation apparatus and method
CN105184283A (en) * 2015-10-16 2015-12-23 天津中科智能识别产业技术研究院有限公司 Method and system for marking key points in human face images
CN106295567A (en) * 2016-08-10 2017-01-04 腾讯科技(深圳)有限公司 Keypoint localization method and terminal
JP2017168077A (en) * 2016-03-09 2017-09-21 株式会社リコー Image processing method, display device, and inspection system


Also Published As

Publication number Publication date
CN108876934A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
CN108876934B (en) Key point marking method, device and system and storage medium
CN108875633B (en) Expression detection and expression driving method, device and system and storage medium
CN112738408B (en) Selective identification and ordering of image modifiers
US20200167995A1 (en) Textured mesh building
US9978174B2 (en) Remote sensor access and queuing
US11450051B2 (en) Personalized avatar real-time motion capture
CN114341780A (en) Context-based virtual object rendering
CN113330484A (en) Virtual surface modification
EP2972950B1 (en) Segmentation of content delivery
JP6022732B2 (en) Content creation tool
US20230116929A1 (en) Mirror-based augmented reality experience
WO2015102854A1 (en) Assigning virtual user interface to physical object
AU2014235416B2 (en) Real world analytics visualization
WO2016122973A1 (en) Real time texture mapping
US20210335004A1 (en) Texture-based pose validation
US11094079B2 (en) Determining a pose of an object from RGB-D images
CN112513875A (en) Ocular texture repair
US20220319231A1 (en) Facial synthesis for head turns in augmented reality content
US11640700B2 (en) Methods and systems for rendering virtual objects in user-defined spatial boundary in extended reality environment
US10366495B2 (en) Multi-spectrum segmentation for computer vision
EP3652704B1 (en) Systems and methods for creating and displaying interactive 3d representations of real objects
WO2019008186A1 (en) A method and system for providing a user interface for a 3d environment
WO2017147826A1 (en) Image processing method for use in smart device, and device
CN117193520A (en) Method and device for displaying handwriting input information in virtual world
WO2015131950A1 (en) Creating an animation of an image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211207

Address after: 100080 room 1018, 10th floor, 1 Zhongguancun Street, Haidian District, Beijing

Applicant after: BEIJING KUANGSHI TECHNOLOGY Co.,Ltd.

Applicant after: Hangzhou kuangyun Jinzhi Technology Co., Ltd

Address before: 100190 A block 2, South Road, Haidian District Academy of Sciences, Beijing 313

Applicant before: BEIJING KUANGSHI TECHNOLOGY Co.,Ltd.

GR01 Patent grant