CN110119722B - Method and apparatus for generating information - Google Patents


Info

Publication number
CN110119722B
CN110119722B
Authority
CN
China
Prior art keywords
key points
face
preset number
face contour
contour key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910414613.2A
Other languages
Chinese (zh)
Other versions
CN110119722A (en)
Inventor
卢艺帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201910414613.2A priority Critical patent/CN110119722B/en
Publication of CN110119722A publication Critical patent/CN110119722A/en
Application granted granted Critical
Publication of CN110119722B publication Critical patent/CN110119722B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/165 - Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Abstract

The embodiments of the disclosure provide a method and an apparatus for generating information. One embodiment of the method comprises: displaying an acquired annotated face image that includes a face region, where the annotated face image is annotated with a preset number of face contour key points located in the face region; generating a smooth curve passing through the preset number of face contour key points, where the positions of the face contour key points are adjusted along with adjustments to the smooth curve; in response to detecting a user adjustment operation, adjusting the smooth curve based on the user adjustment operation; and generating coordinates of the current positions of the face contour key points in the annotated face image. The method and apparatus reduce the computational load of annotating contour key points for the face region.

Description

Method and apparatus for generating information
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method and an apparatus for generating information.
Background
In many application scenarios of face-image key-point annotation, a large number of key points need to be labeled on the contour of the face region contained in a face image. At present, one related approach to annotating contour key points first extracts a curve representing the contour of the face region from the face image, and then adjusts each pre-labeled key point onto the obtained curve.
Disclosure of Invention
Embodiments of the present disclosure propose methods and apparatuses for generating information.
In a first aspect, an embodiment of the present disclosure provides a method for generating information, the method including: displaying an acquired annotated face image that includes a face region, where the annotated face image is annotated with a preset number of face contour key points located in the face region; generating a smooth curve passing through the preset number of face contour key points, where the positions of the face contour key points are adjusted along with adjustments to the smooth curve; in response to detecting a user adjustment operation, adjusting the smooth curve based on the user adjustment operation; and generating coordinates of the current positions of the face contour key points in the annotated face image.
In some embodiments, after adjusting the smooth curve, the method further includes: determining the length of the adjusted smooth curve; and adjusting the preset number of face contour key points based on that length, so that the preset number of face contour key points are uniformly distributed on the adjusted smooth curve.
In some embodiments, generating the smooth curve passing through the preset number of face contour key points includes: generating the smooth curve through a spline interpolation algorithm.
In some embodiments, after adjusting the smooth curve, the method further includes: in response to detecting a user identification operation, generating, based on the user identification operation, identification information for identifying the visibility of face contour key points among the preset number of face contour key points.
In some embodiments, before displaying the acquired annotated face image including the face region, the method further includes: acquiring a face image to be annotated, which includes a face region, and coordinate information to be annotated, which includes initial coordinates for annotating the preset number of face contour key points; and annotating the preset number of face contour key points in the face image to be annotated based on the coordinate information to be annotated, to obtain the annotated face image.
In some embodiments, the method further comprises: displaying, in the annotated face image, the coordinates of the current positions of the preset number of face contour key points.
In a second aspect, an embodiment of the present disclosure provides an apparatus for generating information, the apparatus including: a first display unit configured to display an acquired annotated face image that includes a face region, where the annotated face image is annotated with a preset number of face contour key points located in the face region; a first generating unit configured to generate a smooth curve passing through the preset number of face contour key points, where the positions of the face contour key points are adjusted along with adjustments to the smooth curve; a first adjusting unit configured to, in response to detecting a user adjustment operation, adjust the smooth curve based on the user adjustment operation; and a second generating unit configured to generate coordinates of the current positions of the face contour key points in the annotated face image.
In some embodiments, the apparatus further comprises: a determining unit configured to determine the length of the adjusted smooth curve; and a second adjusting unit configured to adjust the preset number of face contour key points based on that length, so that the preset number of face contour key points are uniformly distributed on the adjusted smooth curve.
In some embodiments, the first generating unit is further configured to: and generating a smooth curve passing through preset number of key points of the face contour by a spline interpolation algorithm.
In some embodiments, the apparatus further comprises: a third generating unit configured to, in response to detecting a user identification operation, generate, based on the user identification operation, identification information for identifying the visibility of face contour key points among the preset number of face contour key points.
In some embodiments, the apparatus further comprises: an acquisition unit configured to acquire a face image to be annotated, which includes a face region, and coordinate information to be annotated, which includes initial coordinates for annotating the preset number of face contour key points; and an annotating unit configured to annotate the preset number of face contour key points in the face image to be annotated based on the coordinate information to be annotated, to obtain the annotated face image.
In some embodiments, the apparatus further comprises: a second display unit configured to display, in the annotated face image, the coordinates of the current positions of the preset number of face contour key points.
In a third aspect, an embodiment of the present disclosure provides a terminal, including: one or more processors; a storage device having one or more programs stored thereon; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any implementation of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, which when executed by a processor implements the method as described in any of the implementations of the first aspect.
The method and apparatus for generating information provided by the embodiments of the present disclosure display an acquired annotated face image that includes a face region, generate a smooth curve passing through a preset number of face contour key points, adjust the smooth curve based on a detected user adjustment operation, and then generate coordinates of the current positions of the face contour key points in the annotated face image. In this way, the computational load of annotating contour key points for the face region is reduced.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram for one embodiment of a method for generating information, according to the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of a method for generating information in accordance with an embodiment of the present disclosure;
FIG. 4 is a flow diagram of yet another embodiment of a method for generating information according to the present disclosure;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for generating information according to the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the relevant invention and do not restrict it. It should also be noted that, for convenience of description, only the portions related to the invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary architecture 100 to which the method for generating information or the apparatus for generating information of the present disclosure may be applied.
As shown in fig. 1, system architecture 100 may include terminal device 101, network 102, and server 103. Network 102 is the medium used to provide communication links between terminal devices 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal device 101 interacts with the server 103 through the network 102 to receive or send messages and the like. Various client applications, such as a web browser and a key point annotation tool, may be installed on the terminal device 101.
The terminal device 101 may be hardware or software. When it is hardware, it may be any of various electronic devices that have a display screen and support key point annotation, including but not limited to smartphones, tablet computers, e-book readers, laptop computers, desktop computers, and the like. When it is software, it may be installed on the electronic devices listed above and implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. This is not specifically limited here.
The server 103 may be a server providing various services, such as a background server for the key point annotation tool on the terminal device 101. As an example, the background server may store in advance a large number of annotated face images that include face regions; the terminal device may then acquire annotated face images from the background server and process them to obtain processed data. Optionally, the terminal device may feed the processed data back to the background server.
It should be noted that the annotated face images may also be stored locally on the terminal device 101, in which case the terminal device 101 may directly extract and process the locally stored annotated face images, and the server 103 may be absent.
The server 103 may be hardware or software. When it is hardware, it may be implemented as a distributed cluster of multiple servers or as a single server. When it is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. This is not specifically limited here.
It is noted that the method for generating information provided by the embodiment of the present disclosure is generally performed by the terminal device 101, and accordingly, the apparatus for generating information is generally disposed in the terminal device 101.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for generating information in accordance with the present disclosure is shown. The method for generating information comprises the following steps:
step 201, displaying the obtained annotated face image including the face region.
In the present embodiment, the execution body of the method for generating information (such as the terminal device 101 shown in fig. 1) may acquire an annotated face image that includes a face region, either locally or from a server with which it communicates (such as the server 103 shown in fig. 1). The execution body may then display the acquired annotated face image.
The face region is the region in which a face is displayed. The annotated face image is generally an image that includes a face region and has been annotated with key points in advance. In practice, the annotated key points may include a preset number of face contour key points located in the face region, where face contour key points are key points labeled on the contour of the face region.
It should be noted that, in some application scenarios, other regions (for example, a background region) may also be included in the annotated human face image.
In some optional implementations of the embodiment, before displaying the acquired annotated human face image, the executing body may further perform the following steps.
First, a face image to be annotated, which includes a face region, and coordinate information to be annotated are acquired.
The coordinate information to be annotated may include initial coordinates used for annotating key points in the face image to be annotated. For example, initial coordinates for labeling the face contour key points among the above-mentioned preset number of face contour key points may be included.
The execution body may acquire the face image to be annotated and the coordinate information to be annotated from a communicatively connected server or from local storage.
Second, the preset number of face contour key points are annotated in the face image to be annotated based on the coordinate information to be annotated, yielding the annotated face image.
As an example, the execution body may annotate the preset number of face contour key points in the face image to be annotated according to the initial coordinates included in the coordinate information to be annotated, thereby obtaining the annotated face image.
As another example, the execution body may additionally annotate other key points in the face image to be annotated according to initial coordinates included in the coordinate information, thereby obtaining the annotated face image.
In these implementations, the acquired face image to be annotated is annotated with face contour key points to obtain the annotated face image.
Step 202, generating a smooth curve passing through a preset number of key points of the face contour.
In this embodiment, after the annotated face image is obtained, the execution body may generate a smooth curve passing through the preset number of face contour key points. In practice, the positions of the face contour key points may be adjusted along with adjustments to the smooth curve: for example, a key point may move as the smooth curve is moved, or as it is stretched. Note that throughout the adjustment, the smooth curve still passes through the preset number of face contour key points.
As an example, the execution body may input the annotated face image into a pre-trained smooth-curve generation model to obtain a smooth curve passing through the preset number of face contour key points. The smooth-curve generation model may be used to characterize the correspondence between annotated images and smooth curves; in practice, it can be built on an artificial neural network and trained on a large number of face images annotated with face contour key points.
In some optional implementations of this embodiment, the execution body may generate the smooth curve passing through the preset number of face contour key points using a spline interpolation algorithm, including but not limited to a quadratic spline interpolation algorithm or a cubic spline interpolation algorithm.
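The description names spline interpolation but gives no implementation. The sketch below uses a uniform Catmull-Rom spline, one family of cubic interpolating splines, purely as an illustration of a smooth curve that passes through every key point; it is not the patent's algorithm, and the function name and sampling density are assumptions.

```python
def catmull_rom(points, samples_per_segment=16):
    """Sample a smooth curve passing through every (x, y) point.

    Uniform Catmull-Rom: each cubic segment interpolates p1 -> p2,
    using neighbours p0 and p3 as tangent hints.  The first and last
    points are duplicated so the curve starts and ends exactly on them.
    """
    if len(points) < 2:
        return [tuple(p) for p in points]
    pts = [points[0], *points, points[-1]]
    curve = []
    for i in range(len(pts) - 3):
        p0, p1, p2, p3 = pts[i], pts[i + 1], pts[i + 2], pts[i + 3]
        for s in range(samples_per_segment):
            t = s / samples_per_segment
            t2, t3 = t * t, t * t * t
            curve.append(tuple(
                0.5 * (2 * p1[d]
                       + (p2[d] - p0[d]) * t
                       + (2 * p0[d] - 5 * p1[d] + 4 * p2[d] - p3[d]) * t2
                       + (3 * p1[d] - p0[d] - 3 * p2[d] + p3[d]) * t3)
                for d in range(2)))
    curve.append(tuple(points[-1]))
    return curve
```

Because the polynomial reduces to p1 at t = 0 on every segment, each annotated key point lies exactly on the sampled curve, matching the requirement that the smooth curve pass through all key points.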
Step 203, in response to detecting a user adjustment operation, adjusting the smooth curve based on the user adjustment operation.
In this embodiment, after generating the smooth curve and in response to detecting a user adjustment operation, the execution body may adjust the smooth curve based on the detected operation. In practice, the execution body may detect the user adjustment operation through an interface running on it.
The user adjustment operation may include various operations performed by the user on the smooth curve, including but not limited to: moving the smooth curve; stretching the smooth curve.
As an example, in response to detecting that the user moves the smooth curve, the execution body may move the smooth curve in the direction of the user's movement; during the move, the curve still passes through the preset number of face contour key points.
As another example, in response to detecting that the user stretches the smooth curve, the execution body may stretch it in the direction of the user's stretch; again, the curve still passes through the preset number of face contour key points.
In some optional implementations of this embodiment, after adjusting the smooth curve and in response to detecting a user identification operation, the execution body may generate, based on the detected operation, identification information for identifying the visibility of face contour key points among the preset number of face contour key points.
The user identification operation marks a key point as visible or invisible, and the identification information records that visibility. In practice, the identification information may take various forms, including but not limited to numbers, pictures, letters, and symbols.
As an example, in response to detecting an operation identifying a face contour key point among the preset number of face contour key points as invisible, the execution body may generate the identification information "(a, b): 0", where "(a, b)" is the coordinates of the key point in the annotated face image and "0" identifies the key point as invisible.
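The "(a, b): 0" format above can be produced by a one-line helper. The sketch below is illustrative only: the helper name and the flag convention (1 = visible, 0 = invisible) are assumptions beyond the single example the description gives.

```python
def visibility_record(keypoint, visible):
    """Format identification info for one key point, following the
    "(a, b): flag" shape from the description's example; here the flag
    is 1 for visible and 0 for invisible (an assumed convention)."""
    x, y = keypoint
    return f"({x}, {y}): {1 if visible else 0}"
```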
In these implementations, identification information can be generated for one or several of the preset number of face contour key points according to actual requirements, and the visibility of those key points in the annotated face image can then be determined from the generated identification information.
Step 204, generating coordinates of the current positions of the preset number of face contour key points in the annotated face image.
In this embodiment, adjusting the smooth curve may have changed the positions of the preset number of face contour key points. The execution body may therefore determine the current position of each face contour key point and generate the coordinates of that position.
In some optional implementations of this embodiment, after generating the coordinates of the current position of each of the preset number of face contour key points, the execution body may display those coordinates in the annotated face image.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for generating information according to the present embodiment. In the application scenario of fig. 3, a tool for labeling key points on an image is running on the terminal device 301.
First, the terminal device 301 may locally acquire an annotated face image 302 that includes a face region and display it. The annotated face image 302 is annotated with face contour key points 303-309 located in the face region. The terminal device 301 may then generate a smooth curve 310 passing through key points 303-309, for example through a quadratic spline interpolation algorithm. Next, in response to detecting that the user stretches the smooth curve 310, the terminal device 301 may stretch it in the direction of the user's stretch. Finally, the terminal device 301 may determine the current positions of key points 303-309 in the annotated face image 302 and generate their coordinates.
At present, annotating key points on the contour of a face region typically requires first extracting contour features of the face region from the face image with various algorithms (for example, various pre-trained models), then deriving a curve representing the contour from the extracted features, and finally adjusting each pre-labeled key point onto the obtained curve in turn. In practice, extracting contour features with such algorithms tends to increase the computational load of the execution body. The method provided by the embodiments of the present disclosure instead adjusts the positions of the preset number of annotated face contour key points by adjusting the generated smooth curve. Since no contour features need to be extracted in this process, the computational load is reduced to a certain extent.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for generating information is shown. The flow 400 of the method for generating information comprises the steps of:
step 401, displaying the obtained annotated face image including the face region.
Step 402, generating a smooth curve passing through a preset number of key points of the face contour.
Step 403, in response to detecting a user adjustment operation, adjusting the smooth curve based on the user adjustment operation.
Steps 401, 402, and 403 are consistent with steps 201, 202, and 203; the descriptions above for steps 201, 202, and 203 apply to them as well and are not repeated here.
Step 404, determining the length of the adjusted smooth curve.
In the present embodiment, after adjusting the generated smooth curve, the execution body of the method for generating information (e.g., the terminal device 101 shown in fig. 1) may determine the length of the adjusted smooth curve.
Specifically, the execution body may first select the point at one end of the adjusted smooth curve. Starting from that point, it may then select a point on the curve at every straight-line distance of a predetermined length, until the remaining length of the curve is insufficient to select another point. The predetermined length is generally a small value between 0 and 1 (e.g., 0.0001). Because the predetermined length is small, the distance between any two adjacent selected points is small, so the execution body may approximate the length of the adjusted smooth curve by the sum of the straight-line distances between adjacent points. As an example, if the predetermined length is α and the number of selected points is T, the length of the adjusted smooth curve is approximately (T-1)α.
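The chord-sum approximation described above can be sketched as follows, assuming the adjusted curve is available as a dense list of (x, y) sample points; the function name is hypothetical.

```python
import math

def polyline_length(curve):
    """Approximate arc length as the sum of straight-line distances
    between consecutive sample points -- the approximation in the
    description: with a small step alpha and T sample points, the
    length is roughly (T - 1) * alpha."""
    return sum(math.dist(p, q) for p, q in zip(curve, curve[1:]))
```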
Step 405, based on the length of the adjusted smooth curve, adjusting a preset number of face contour key points, so that the preset number of face contour key points are uniformly distributed on the adjusted smooth curve.
In this embodiment, after determining the length of the adjusted smooth curve, the execution body may adjust the positions of the face contour key points, keeping the two key points at the ends of the curve fixed, so that the preset number of face contour key points are uniformly distributed on the adjusted smooth curve.
For example, if the length of the adjusted smooth curve is L and the preset number is M, the curve length between adjacent face contour key points should be L/(M-1). The positions of the key points other than the two at the ends of the curve can then be adjusted in turn so that the curve length between each pair of adjacent key points equals this value.
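The redistribution rule above (arc length L/(M-1) between adjacent key points, with both endpoints fixed) can be sketched over a densely sampled curve. The polyline representation and all names here are illustrative assumptions, not the patent's implementation.

```python
import math

def redistribute(curve, m):
    """Place m key points on a sampled curve so consecutive points are
    separated by equal arc length total/(m - 1); endpoints stay fixed.
    `curve` is a dense list of (x, y) samples along the smooth curve."""
    seg = [math.dist(p, q) for p, q in zip(curve, curve[1:])]
    step = sum(seg) / (m - 1)          # target spacing L/(M-1)
    out, walked, target, i = [curve[0]], 0.0, step, 0
    for _ in range(m - 2):             # interior key points only
        # advance along the polyline until the target arc length is reached
        while i < len(seg) and walked + seg[i] < target:
            walked += seg[i]
            i += 1
        t = (target - walked) / seg[i]  # fraction into current segment
        p, q = curve[i], curve[i + 1]
        out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
        target += step
    out.append(curve[-1])
    return out
```

On a straight sampled curve from (0, 0) to (4, 0), redistributing three key points keeps the endpoints and places the middle point at the arc-length midpoint.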
Step 406, generating coordinates of the current position of the face contour key points in the labeled face image in a preset number of face contour key points.
In this embodiment, after the preset number of face contour key points are uniformly distributed on the adjusted smooth curve, the execution body may generate the coordinates of the current position of each key point using a method similar to that described in step 204.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the method for generating information in this embodiment adds the step of determining the length of the adjusted smooth curve and the step of redistributing the preset number of face contour key points uniformly along it. Thus, in the scheme described in this embodiment, adjusting the smooth curve onto the contour of the face region also places the face contour key points uniformly along that contour.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for generating information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for generating information provided by the present embodiment includes a first display unit 501, a first generating unit 502, a first adjusting unit 503, and a second generating unit 504. The first display unit 501 may be configured to display the acquired annotated face image including a face region, wherein the annotated face image is annotated with a preset number of face contour key points located in the face region. The first generating unit 502 may be configured to generate a smooth curve passing through the preset number of face contour key points, wherein the positions of the face contour key points in the preset number of face contour key points are adjusted along with the adjustment of the smooth curve. The first adjusting unit 503 may be configured to, in response to detecting a user adjustment operation, adjust the smooth curve based on the user adjustment operation. The second generating unit 504 may be configured to generate, for each of the preset number of face contour key points, the coordinates of its current position in the annotated face image.
In the present embodiment, in the apparatus 500 for generating information: the detailed processing of the first display unit 501, the first generation unit 502, the first adjustment unit 503, and the second generation unit 504 and the technical effects thereof can refer to the related descriptions of step 201, step 202, step 203, and step 204 in the corresponding embodiment of fig. 2, which are not repeated herein.
In some optional implementations of this embodiment, the apparatus 500 may further include a determining unit (not shown in the figure) and a second adjusting unit (not shown in the figure). The determining unit may be configured to determine the length of the adjusted smooth curve. The second adjusting unit may be configured to adjust the preset number of face contour key points based on the length of the adjusted smooth curve, so that the preset number of face contour key points are uniformly distributed on the adjusted smooth curve.
In some optional implementations of this embodiment, the first generating unit 502 may be further configured to generate, by a spline interpolation algorithm, a smooth curve passing through the preset number of face contour key points.
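As an illustrative sketch of this implementation (the patent does not name a specific spline variant; SciPy's parametric B-spline routines `splprep`/`splev` are used here as one possible choice, and the key-point coordinates are hypothetical):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical annotated key points (x, y) along a face contour, e.g. a jawline.
keypoints = np.array([
    [0.0, 4.0], [1.0, 2.5], [2.0, 1.2], [3.0, 0.5],
    [4.0, 0.3], [5.0, 0.5], [6.0, 1.2], [7.0, 2.5], [8.0, 4.0],
])

# Fit a parametric cubic B-spline; s=0 forces the curve to pass
# exactly through every key point (interpolation, not smoothing).
tck, u = splprep([keypoints[:, 0], keypoints[:, 1]], s=0)

# Sample the smooth curve densely, e.g. for on-screen display.
xs, ys = splev(np.linspace(0.0, 1.0, 200), tck)
curve = np.stack([xs, ys], axis=1)
```

After a user drags a key point, refitting the spline to the updated key points in the same way would yield the adjusted smooth curve.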
In some optional implementations of this embodiment, the apparatus 500 may further include a third generating unit (not shown in the figure). The third generating unit may be configured to, in response to detecting a user identification operation, generate identification information for identifying the visibility of face contour key points among the preset number of face contour key points, based on the user identification operation.
In some optional implementations of this embodiment, the apparatus 500 may further include an obtaining unit (not shown in the figure) and a labeling unit (not shown in the figure). The obtaining unit may be configured to obtain a face image to be labeled, which includes a face region, and coordinate information to be labeled, which includes initial coordinates for labeling the face contour key points of the preset number of face contour key points. The labeling unit may be configured to label the preset number of face contour key points in the face image to be labeled based on the coordinate information to be labeled, to obtain the labeled face image.
In some optional implementations of this embodiment, the apparatus 500 may further include a second display unit (not shown in the figure). The second display unit may be configured to display, in the labeled face image, the coordinates of the current positions of the preset number of face contour key points in the labeled face image.
The apparatus provided in the foregoing embodiment of the present disclosure may first display the acquired annotated face image including the face region through the first display unit 501, then generate a smooth curve passing through the preset number of face contour key points through the first generating unit 502, then, after a user adjustment operation is detected, adjust the smooth curve through the first adjusting unit 503 based on the detected user adjustment operation, and then generate, through the second generating unit 504, the coordinates of the current position of each of the preset number of face contour key points in the annotated face image. Therefore, the workload of labeling contour key points for the face region is reduced.
Referring now to fig. 6, shown is a schematic diagram of an electronic device (e.g., terminal device in fig. 1) 600 suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The terminal device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the use range of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the electronic device 600. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage devices 608 including, for example, magnetic tape, hard disk, etc.; and communication devices 609.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of embodiments of the present disclosure. It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be included in the terminal device; or may exist separately without being assembled into the terminal device. The computer readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: displaying the obtained labeled face image comprising the face area, wherein the labeled face image is labeled with a preset number of face contour key points in the face area; generating a smooth curve passing through a preset number of human face contour key points, wherein the positions of the human face contour key points in the preset number of human face contour key points are adjusted along with the adjustment of the smooth curve; in response to detecting the user adjustment operation, adjusting the smoothing curve based on the user adjustment operation; and generating coordinates of the current position of the face contour key points in the labeled face image in the preset number of face contour key points.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a first display unit, a first generation unit, a first adjustment unit, and a second generation unit. The names of the units do not in some cases constitute a limitation to the units themselves, and for example, the first display unit may also be described as a "unit displaying the acquired annotated face image including the face region".
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other technical solutions formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept described above, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.

Claims (12)

1. A method for generating information, comprising:
displaying the obtained labeled face image comprising a face area, wherein the labeled face image is labeled with a preset number of face contour key points in the face area;
generating a smooth curve passing through the preset number of face contour key points, wherein the positions of the face contour key points in the preset number of face contour key points are adjusted along with the adjustment of the smooth curve;
in response to detecting a user adjustment operation, adjusting the smooth curve based on the user adjustment operation, wherein the user adjustment operation comprises an operation performed on the smooth curve by a user;
generating coordinates of the current positions of the face contour key points in the labeled face images in the preset number of face contour key points;
wherein after the adjusting the smoothing curve, the method further comprises:
determining the length of the adjusted smooth curve;
and adjusting the preset number of face contour key points based on the length of the adjusted smooth curve, so that the preset number of face contour key points are uniformly distributed on the adjusted smooth curve.
2. The method of claim 1, wherein the generating a smooth curve that passes through the preset number of face contour key points comprises:
generating a smooth curve passing through the preset number of face contour key points by a spline interpolation algorithm.
3. The method of claim 1, wherein after the adjusting the smoothing curve, the method further comprises:
in response to detecting a user identification operation, generating identification information for identifying visibility of face contour keypoints in the preset number of face contour keypoints based on the user identification operation.
4. The method according to any one of claims 1-3, wherein prior to said displaying the acquired annotated face image comprising a face region, the method further comprises:
acquiring a face image to be marked comprising a face area and coordinate information to be marked, wherein the coordinate information to be marked comprises initial coordinates for marking face contour key points in the preset number of face contour key points;
and marking the preset number of face contour key points in the face image to be marked based on the coordinate information to be marked, to obtain the marked face image.
5. The method according to any one of claims 1-3, wherein the method further comprises:
displaying, in the labeled face image, coordinates of the current positions of the preset number of face contour key points in the labeled face image.
6. An apparatus for generating information, comprising:
the first display unit is configured to display the acquired annotated face image comprising a face area, wherein the annotated face image is marked with a preset number of face contour key points positioned in the face area;
a first generating unit configured to generate a smooth curve passing through the preset number of face contour key points, wherein positions of face contour key points in the preset number of face contour key points are adjusted along with adjustment of the smooth curve;
a first adjusting unit configured to adjust the smooth curve based on a user adjustment operation in response to detection of the user adjustment operation, wherein the user adjustment operation includes an operation performed on the smooth curve by a user;
a second generating unit configured to generate coordinates of a current position of a face contour key point in the annotated face image, from among the preset number of face contour key points;
wherein the apparatus further comprises:
a determining unit configured to determine a length of the adjusted smoothing curve;
and the second adjusting unit is configured to adjust the preset number of the face contour key points based on the length of the adjusted smooth curve, so that the preset number of the face contour key points are uniformly distributed on the adjusted smooth curve.
7. The apparatus of claim 6, wherein the first generating unit is further configured to:
generate a smooth curve passing through the preset number of face contour key points by a spline interpolation algorithm.
8. The apparatus of claim 6, wherein the apparatus further comprises:
a third generating unit configured to generate, in response to detection of a user identification operation, identification information for identifying visibility of face contour keypoints of the preset number of face contour keypoints based on the user identification operation.
9. The apparatus of any of claims 6-8, wherein the apparatus further comprises:
an obtaining unit configured to obtain a face image to be annotated including a face region and coordinate information to be annotated, wherein the coordinate information to be annotated includes initial coordinates for annotating face contour key points of the preset number of face contour key points;
and a labeling unit configured to annotate the preset number of face contour key points in the face image to be annotated based on the coordinate information to be annotated, to obtain the annotated face image.
10. The apparatus of any of claims 6-8, wherein the apparatus further comprises:
a second display unit configured to display, in the labeled face image, coordinates of the current positions of the preset number of face contour key points in the labeled face image.
11. A terminal, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN201910414613.2A 2019-05-17 2019-05-17 Method and apparatus for generating information Active CN110119722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910414613.2A CN110119722B (en) 2019-05-17 2019-05-17 Method and apparatus for generating information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910414613.2A CN110119722B (en) 2019-05-17 2019-05-17 Method and apparatus for generating information

Publications (2)

Publication Number Publication Date
CN110119722A CN110119722A (en) 2019-08-13
CN110119722B true CN110119722B (en) 2020-07-24

Family

ID=67522775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910414613.2A Active CN110119722B (en) 2019-05-17 2019-05-17 Method and apparatus for generating information

Country Status (1)

Country Link
CN (1) CN110119722B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110910308B (en) * 2019-12-03 2024-03-05 广州虎牙科技有限公司 Image processing method, device, equipment and medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140050392A1 (en) * 2012-08-15 2014-02-20 Samsung Electronics Co., Ltd. Method and apparatus for detecting and tracking lips
CN107767326B (en) * 2017-09-28 2021-11-02 北京奇虎科技有限公司 Method and device for processing object transformation in image and computing equipment
CN109409262A (en) * 2018-10-11 2019-03-01 北京迈格威科技有限公司 Image processing method, image processing apparatus, computer readable storage medium
CN109461117B (en) * 2018-10-30 2023-11-24 维沃移动通信有限公司 Image processing method and mobile terminal
CN109584152A (en) * 2018-11-30 2019-04-05 深圳市脸萌科技有限公司 Image processing method, device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN110119722A (en) 2019-08-13

Similar Documents

Publication Publication Date Title
CN109816589B (en) Method and apparatus for generating cartoon style conversion model
CN110162670B (en) Method and device for generating expression package
CN109858445B (en) Method and apparatus for generating a model
CN109993150B (en) Method and device for identifying age
CN109829432B (en) Method and apparatus for generating information
EP3893125A1 (en) Method and apparatus for searching video segment, device, medium and computer program product
CN109981787B (en) Method and device for displaying information
CN110059623B (en) Method and apparatus for generating information
CN110163171B (en) Method and device for recognizing human face attributes
CN109800730B (en) Method and device for generating head portrait generation model
CN110516678B (en) Image processing method and device
CN108510084B (en) Method and apparatus for generating information
CN111210485B (en) Image processing method and device, readable medium and electronic equipment
CN110427915B (en) Method and apparatus for outputting information
US20210200971A1 (en) Image processing method and apparatus
CN109934142B (en) Method and apparatus for generating feature vectors of video
CN112749695A (en) Text recognition method and device
CN111897950A (en) Method and apparatus for generating information
CN110188660B (en) Method and device for identifying age
CN112488095A (en) Seal image identification method and device and electronic equipment
CN109829431B (en) Method and apparatus for generating information
CN109919220B (en) Method and apparatus for generating feature vectors of video
CN110119722B (en) Method and apparatus for generating information
CN111292333A (en) Method and apparatus for segmenting an image
CN110189364B (en) Method and device for generating information, and target tracking method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.
