CN112183388A - Image processing method, apparatus, device and medium - Google Patents

Image processing method, apparatus, device and medium

Info

Publication number
CN112183388A
CN112183388A (application number CN202011058037.1A)
Authority
CN
China
Prior art keywords
nail
hand
image
determining
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011058037.1A
Other languages
Chinese (zh)
Inventor
黄佳斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202011058037.1A
Publication of CN112183388A
Legal status: Pending (Current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 - Static hand or arm
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition

Abstract

Embodiments of the disclosure provide an image processing method, apparatus, device and computer-readable medium. One embodiment of the method comprises: determining at least one hand key point in a hand image; determining a nail image in the hand image based on the at least one hand key point; and determining nail-related information based on the nail image, the related information comprising nail key points and a nail region. This embodiment derives the nail-related information from hand key points. Because hand key points are easy to detect in a hand image, the nail image can be located quickly and accurately from them, and the nail-related information determined from that nail image is accordingly accurate.

Description

Image processing method, apparatus, device and medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to an image processing method, an apparatus, a device, and a computer-readable medium.
Background
Adding special effects to images according to user requirements is currently a very popular field. For example, special effects may be applied to a user's nails: the nails may be recolored, lengthened, and so on. To achieve a good visual result, the relevant nail information must be located accurately; here, the relevant nail information refers to nail key points and nail regions.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose an image processing method, apparatus, device and computer readable medium to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide an image processing method, including: determining at least one hand keypoint in the hand image; determining a nail image in the hand image based on the at least one hand key point; and determining nail related information based on the nail image, wherein the related information comprises nail key points and nail regions.
In a second aspect, some embodiments of the present disclosure provide an image processing apparatus comprising: a first determination unit configured to determine at least one hand keypoint in a hand image; a second determination unit configured to determine a fingernail image in the hand image based on the at least one hand key point; a third determining unit configured to determine nail related information based on the nail image, wherein the related information includes a nail key point and a nail region.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method as in any one of the first aspects.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, wherein the program when executed by a processor implements a method as in any one of the first aspect.
One of the above-described embodiments of the present disclosure has the following advantageous effects: at least one hand key point is determined in the hand image, and a nail image is determined in the hand image according to the at least one hand key point. When searching for the nail image, the nail image is thus located by means of the hand key points. Because hand key points are easy to detect in a hand image, the nail image can be determined quickly and accurately from them. Nail-related information, comprising nail key points and a nail region, is then determined from this accurate nail image. Since the nail image is accurate, the nail-related information determined from it is correspondingly accurate.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a schematic illustration of one application scenario of an image processing method according to some embodiments of the present disclosure;
FIG. 2 is a schematic diagram of one application scenario of an image processing method according to some embodiments of the present disclosure;
FIG. 3 is a flow diagram of some embodiments of an image processing method according to the present disclosure;
FIG. 4 is a flow diagram of further embodiments of an image processing method according to the present disclosure;
FIG. 5 is a schematic block diagram of some embodiments of an image processing apparatus according to the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure;
FIG. 7 is a schematic illustration of the step of determining finger keypoints according to some embodiments of the image processing method of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions relevant to the present invention are shown in the drawings. The embodiments and the features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an" and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand that they should be read as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 and 2 are schematic diagrams of application scenarios of image processing methods according to some embodiments of the present disclosure.
As shown in diagram 100, the electronic device 101 may determine at least one hand key point in a given hand image 102. The number of hand key points may be any number; for example, the hand key points 103 may number 15 or 21. With continued reference to FIG. 2, in diagram 200 the electronic device 101 determines nail images from the 15 hand key points. Here the nail images may be, as in the figure, a little-finger nail image 202, a ring-finger nail image 203, a middle-finger nail image 204, an index-finger nail image 205 and a thumb nail image 206. Finally, nail-related information, comprising nail key points and a nail region, is determined from the nail images.
It is understood that the image processing method may be performed by the electronic device 101 described above. The electronic device 101 may be hardware or software. When the electronic device 101 is hardware, it may be any of various electronic devices with information processing capabilities, including but not limited to smartphones, tablets, e-book readers, laptop portable computers, desktop computers, servers, and the like. When the electronic device 101 is software, it may be installed in the electronic devices listed above and implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module; no specific limitation is imposed here.
It should be understood that the number of electronic devices in fig. 1 is merely illustrative. There may be any number of electronic devices, as desired for implementation.
With continued reference to fig. 3, a flow 300 of some embodiments of an image processing method according to the present disclosure is shown. The image processing method comprises the following steps:
at step 301, at least one hand keypoint is determined in a hand image.
In some embodiments, the execution body (e.g., the electronic device shown in fig. 1) may determine the at least one hand key point in the hand image in various ways. Here, the hand image may be any image in which a hand is displayed. The hand key points may be used to represent the coordinate information of hand bone nodes, and their number may be a predetermined number, for example 21.
As an example, the execution body may input the hand image into a pre-trained deep neural network to obtain 21 hand key points.
In some optional implementations of some embodiments, the performing subject may input a hand image into a pre-trained hand keypoint detection model, generating the at least one hand keypoint.
As an example, the above-mentioned hand key point detection model may include, but is not limited to, at least one of: MobileNet, ShuffleNet. On this basis, the hand key point detection model may further include the following sub-network structures: an inverted residual structure, a channel shuffle structure, a channel split structure, and a depthwise-separable convolution structure.
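As a purely illustrative sketch (not part of the disclosure), two of the sub-structures named above, a depthwise-separable convolution and an inverted residual block, might be written in PyTorch as follows; all class names and hyper-parameters here are assumptions:

```python
# Illustrative only: assumed PyTorch implementations of the named sub-structures.
import torch.nn as nn

class SeparableConv(nn.Module):
    """Depthwise-separable convolution: a 3x3 depthwise conv followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class InvertedResidual(nn.Module):
    """MobileNetV2-style inverted residual block: expand, depthwise filter, project, plus a skip."""
    def __init__(self, ch, expansion=4):
        super().__init__()
        hidden = ch * expansion
        self.block = nn.Sequential(
            nn.Conv2d(ch, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, 1, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, ch, 1, bias=False), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return x + self.block(x)  # residual (skip) connection
```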
The sample data used for the hand key point detection model may be obtained in various ways. The sample data may be images that include a hand; in practice, they can be captured with a mobile phone camera or obtained from a database. The sample data includes pictures showing a hand in various scenes (for example, indoors and outdoors), and covers the various poses, angles and appearances in which a hand may appear.
After the hand pictures are collected, the positions of all hand key points in each picture can be annotated. In practice, the hand bone nodes of the hand shown in the image need to be labeled.
In practice, the hand key point detection model may be trained with the Adam optimizer, and the training hyper-parameters may be set accordingly. For example, the learning rate may be set to 0.01 and the weight decay (weight penalty) to 10^-5. The learning rate may be decayed following a multi-step schedule, and the number of samples per training batch may be 96.
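A minimal, hedged sketch of such a training configuration (Adam, learning rate 0.01, weight decay 10^-5, multi-step learning-rate decay, batch size 96), assuming PyTorch; the stand-in model, dummy data, MSE loss and milestone epochs are placeholders rather than details from the disclosure:

```python
# Hedged sketch: stand-in model, dummy data and MSE loss are placeholders, not from the disclosure.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Conv2d(3, 21, kernel_size=3, padding=1)          # placeholder for the keypoint backbone
dataset = TensorDataset(torch.rand(192, 3, 64, 64),          # dummy hand images
                        torch.rand(192, 21, 64, 64))         # dummy 21-channel keypoint heatmaps
loader = DataLoader(dataset, batch_size=96, shuffle=True)    # 96 samples per training batch

optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-5)
# Multi-step decay of the learning rate; the milestone epochs are assumptions.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60], gamma=0.1)
criterion = nn.MSELoss()

for epoch in range(100):
    for images, target_heatmaps in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), target_heatmaps)
        loss.backward()
        optimizer.step()
    scheduler.step()
```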
The flow of detecting hand key points with the hand key point detection model can be summarized roughly as follows: the hand image is input into the hand key point detection model, which outputs a probability map indicating, for each position in the hand image, the probability that a hand key point appears there. The probability map may be a matrix whose values lie between 0 and 1.
In practice, each position of the hand image can be traversed, and the positions whose probability values meet a preset threshold are found. These positions are determined as hand key points.
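A minimal sketch of this thresholding step, assuming the model outputs one probability map per key point and that the peak of each map is kept when it meets the threshold (the per-map peak rule is an assumption):

```python
# Hedged sketch: keeps the peak of each per-keypoint probability map if it meets the threshold.
import numpy as np

def extract_keypoints(prob_maps, threshold=0.5):
    """prob_maps: array of shape (K, H, W) with values in [0, 1], one map per hand key point.
    Returns, for each key point, the (x, y) of its highest-probability position when that
    probability meets the threshold, otherwise None."""
    keypoints = []
    for prob in prob_maps:
        y, x = np.unravel_index(np.argmax(prob), prob.shape)
        keypoints.append((int(x), int(y)) if prob[y, x] >= threshold else None)
    return keypoints

# Example with 21 random maps for a 64x64 hand image.
print(extract_keypoints(np.random.rand(21, 64, 64), threshold=0.9))
```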
In some embodiments, before the at least one hand key point is determined, the hand image may be obtained from a target image by receiving a manually entered annotation of the target image and extracting the annotated hand region.
In some optional implementations of some embodiments, determining the hand image in the target image may further be obtained by:
the first step is as follows: the execution subject may input the target image to a hand detection model trained in advance, and obtain position information including the hand image.
By way of example, the hand detection model may include, but is not limited to, at least one of: MobileNet, ShuffleNet. On this basis, the hand detection model may further include the following sub-network structures: an inverted residual structure, a channel shuffle structure, a channel split structure, and a depthwise-separable convolution structure.
In practice, the target image is input into the pre-trained hand detection model, which produces two pieces of information. The first is a probability map, which may be a matrix whose values lie between 0 and 1 and indicate, for each position, the probability that a hand appears there. A threshold, such as 0.6, may be set; any position whose probability exceeds 0.6 is taken to show a hand, which gives a rough hand position P. The second is an offset relative to P, together with the length and width of the hand box. The offset fine-tunes the rough position P into a more accurate one, and finally a box enclosing the hand image, i.e. the position information of the hand image, is determined.
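As a hedged sketch of this decoding step, assuming the model output is laid out as one probability map plus per-position offset and box-size channels (the array layout and the center-based box convention are assumptions):

```python
# Hedged sketch: the (prob_map, offsets, sizes) layout and the center-based box are assumptions.
import numpy as np

def decode_hand_box(prob_map, offsets, sizes, threshold=0.6):
    """prob_map: (H, W) hand-presence probabilities.
    offsets:  (2, H, W) per-position (dx, dy) refinement of the coarse position P.
    sizes:    (2, H, W) per-position predicted (box_width, box_height).
    Returns an (x, y, w, h) box enclosing the hand, or None if no position meets the threshold."""
    if prob_map.max() < threshold:
        return None
    y, x = np.unravel_index(np.argmax(prob_map), prob_map.shape)  # coarse hand position P
    dx, dy = offsets[:, y, x]                                     # fine adjustment of P
    w, h = sizes[:, y, x]
    cx, cy = x + dx, y + dy
    return (cx - w / 2, cy - h / 2, w, h)

# The target image can then be cropped with the returned box to obtain the hand image.
```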
The sample data used for the hand detection model may be obtained in various ways. The sample data may be images that include a hand; in practice, they can be captured with a mobile phone camera or obtained from a database. The sample data includes pictures showing a hand in various scenes (for example, indoors and outdoors), and covers the various poses, angles and appearances in which a hand may appear.
After the hand pictures are collected, they can be annotated. In practice, the annotation needs to include the position of the hand and the length and width of the hand box.
In practice, the hand detection model may be trained with the Adam optimizer, and the training hyper-parameters may be set accordingly. For example, the learning rate may be set to 0.01 and the weight decay (weight penalty) to 10^-5. The learning rate may be decayed following a multi-step schedule, and the number of samples per training batch may be 96.
The second step is that: the target image may be clipped according to the position information to obtain the hand image.
In these implementations, using the hand detection model saves manual labor while accurately locating the image region that shows the hand in the target image, and provides a basis for more accurate subsequent recognition of finer details in the target image.
Step 302, determining a fingernail image in the hand image based on the at least one hand keypoint.
In some embodiments, the executing agent may use various methods to determine a nail image in the hand image based on the at least one hand keypoint. As an example, the execution subject may determine the nail image from at least one of the hand key points using a manual labeling method.
In some optional implementations of some embodiments, determining a nail image in the hand image based on the at least one hand keypoint may further be obtained by:
the first step is as follows: as shown in FIG. 7, FIG. 7 illustrates a step 700 of determining finger keypoints according to some embodiments of the image processing method of the present disclosure. The execution body may select a predetermined number of hand keypoints from the at least one hand keypoint as finger keypoints. For example, 5 hand keypoints may be selected from the at least one hand keypoint as finger keypoints. In practice, finger keypoints often represent the location of the finger nail.
The second step is that: according to the finger key point, the execution body may draw a circle around the key point with the key point as a center, thereby enclosing an image including a nail. Then, the circled nail image is cut out, and finally, the nail image is generated.
In some optional implementations of some embodiments, both the nail image and the category of the nail image are determined in the hand image according to the at least one hand key point. Here, a hand key point characterizing the fingernail position may first be selected from the at least one hand key point. Since the selected finger key point carries the nail category (e.g., thumb, index finger, middle finger, ring finger or little finger), the category of the nail image can also be determined.
Step 303, determining the information related to the fingernail based on the fingernail image.
In some embodiments, the execution body may determine the nail-related information from the nail image in various ways. Here, the related information includes nail key points and a nail region, and the nail key points may be used to represent the position information of the nail. In practice, there may be multiple nail key points, for example 8.
As an example, the execution subject may find nail key points and nail regions from the nail image by a manual labeling method.
In some optional implementations of some embodiments, the executing subject may input the nail image into a pre-trained nail keypoint detection and segmentation model, generating the fingernail region and the fingernail keypoints.
As an example, the above-mentioned nail key point detection and segmentation model may include, but is not limited to, at least one of: MobileNet, ShuffleNet. On this basis, the nail key point detection and segmentation model may further include the following sub-network structures: an inverted residual structure, a channel shuffle structure, a channel split structure, and a depthwise-separable convolution structure.
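As an illustrative sketch only (the disclosure does not specify this architecture), such a joint model might attach three small heads to the shared backbone features: per-pixel nail key point heatmaps, a nail segmentation map, and a nail-verification score of the kind discussed later; the channel counts are assumptions:

```python
# Illustrative assumption only: a multi-task head over shared backbone features.
import torch
import torch.nn as nn

class NailHead(nn.Module):
    """Hypothetical joint head: nail key point heatmaps, a nail segmentation map,
    and a scalar nail-verification score."""
    def __init__(self, feat_ch=64, num_keypoints=8):
        super().__init__()
        self.keypoint_head = nn.Conv2d(feat_ch, num_keypoints, 1)   # one heatmap per nail key point
        self.mask_head = nn.Conv2d(feat_ch, 1, 1)                   # per-pixel nail probability
        self.verify_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                         nn.Linear(feat_ch, 1))     # "is a nail shown" score

    def forward(self, features):
        return (torch.sigmoid(self.keypoint_head(features)),
                torch.sigmoid(self.mask_head(features)),
                torch.sigmoid(self.verify_head(features)))
```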
The sample data used for the nail key point detection module may be obtained in various ways. The sample data may be images that include a nail; in practice, they can be captured with a mobile phone camera or obtained from a database. The sample data includes pictures showing a nail in various scenes (for example, indoors and outdoors), and covers the various shapes, angles and appearances in which a nail may appear.
After the nail pictures are collected, they can be annotated. In practice, the positions of the nail key points need to be marked.
In practice, the nail key point detection module may be trained with the Adam optimizer, and the training hyper-parameters may be set accordingly. For example, the learning rate may be set to 0.01 and the weight decay (weight penalty) to 10^-5. The learning rate may be decayed following a multi-step schedule, and the number of samples per training batch may be 96.
The flow of detecting nail key points with the nail key point detection module can be summarized roughly as follows: the nail image is input into the nail key point detection module, which outputs a probability map indicating, for each position in the nail image, the probability that a nail key point appears there. The probability map may be a matrix whose values lie between 0 and 1. In practice, each position of the nail image can be traversed, and the positions whose probability values meet a preset threshold are determined as nail key points.
The sample data used for the nail segmentation module may be obtained in various ways. The sample data may be images that include a nail; in practice, they can be captured with a mobile phone camera or obtained from a database. The sample data includes pictures showing a nail in various scenes (for example, indoors and outdoors), and covers the various shapes, angles and appearances in which a nail may appear.
After the nail pictures are collected, they can be annotated. In practice, the region in which the nail is located needs to be marked.
In practice, the nail segmentation module may be trained with the Adam optimizer, and the training hyper-parameters may be set accordingly. For example, the learning rate may be set to 0.01 and the weight decay (weight penalty) to 10^-7. The learning rate may be decayed following a multi-step schedule, and the number of samples per training batch may be 96.
The procedure for using the nail segmentation module can be summarized roughly as follows: the nail image is input into the nail segmentation module, which outputs a probability map indicating, for each position in the nail image, the probability that it belongs to the nail region. The probability map may be a matrix whose values lie between 0 and 1. In practice, each position of the nail image can be traversed, and the positions whose probability values meet a preset threshold together form the nail region.
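A minimal sketch of turning the per-position probability map into a nail region by simple thresholding, as described; the threshold value is an assumption:

```python
# Hedged sketch: simple thresholding of the per-pixel nail probabilities; the threshold is assumed.
import numpy as np

def nail_region_from_probs(prob_map, threshold=0.7):
    """prob_map: (H, W) per-pixel nail probabilities in [0, 1].
    Returns a boolean mask marking the positions that form the nail region."""
    return prob_map >= threshold

mask = nail_region_from_probs(np.random.rand(32, 32))
print(int(mask.sum()), "positions classified as nail")
```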
One of the above-described embodiments of the present disclosure has the following advantageous effects: at least one hand key point is determined in the hand image, and a nail image is determined in the hand image according to the at least one hand key point. When searching for the nail image, the nail image is thus located by means of the hand key points. Because hand key points are easy to detect in a hand image, the nail image can be determined quickly and accurately from them. Nail-related information, comprising nail key points and a nail region, is then determined from this accurate nail image. Since the nail image is accurate, the nail-related information determined from it is correspondingly accurate.
With further reference to fig. 4, a flow 400 of further embodiments of an image processing method is shown. The flow 400 of the image processing method comprises the following steps:
step 401, determining a fingernail image in a hand image based on at least one hand key point.
Step 402, determining related information of the fingernail based on the fingernail image, wherein the related information comprises a fingernail key point and a fingernail area.
In some embodiments, for the detailed implementation of steps 401 to 402 and the technical effects they bring, reference may be made to steps 302 to 303 in the embodiments corresponding to fig. 3, which are not repeated here.
In some embodiments, the fingernail-related information may further include fingernail verification information.
And step 403, in response to that the nail verification information does not meet the preset condition, discarding the nail key points and the nail region.
In some embodiments, the execution body may discard the nail key points and the nail region in response to the nail verification information not meeting a preset condition. Here, the nail verification information indicates whether a nail is actually shown in the nail image. In practice, the nail verification information may be a probability: when the probability is high, a nail is considered to be shown in the nail image. The preset condition may be that this probability is greater than a preset threshold, for example 70%.
As an example, in response to the nail verification information being not greater than 70%, the above nail key point and the above nail region are discarded.
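A hedged sketch of this gating step, treating the verification information as a probability and the 70% figure as the example threshold given above; the field names are hypothetical:

```python
# Hedged sketch of the gating step; the 0.7 threshold mirrors the 70% example above.
def filter_by_verification(nail_results, threshold=0.7):
    """nail_results: list of dicts with 'keypoints', 'region' and 'verify_prob' entries
    (hypothetical field names). Keeps only results whose verification probability exceeds
    the threshold; the key points and region of the others are discarded."""
    return [r for r in nail_results if r["verify_prob"] > threshold]
```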
Compared with the embodiments corresponding to fig. 3, the flow 400 of the image processing method in the embodiments corresponding to fig. 4 generates nail verification information at the same time as the nail key points and the nail region. The nail image is checked against the verification information: the corresponding nail key points and nail region are retained only if the verification information meets the preset condition, and are discarded otherwise. This verification effectively prevents non-nail regions from affecting the final result and makes the final nail key points and nail region more reliable.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of an image processing apparatus, which correspond to those shown in fig. 2, and which may be applied in particular in various electronic devices.
As shown in fig. 5, an image processing apparatus 500 of some embodiments includes: a first determining unit 501, a second determining unit 502 and a third determining unit 503. Wherein the first determination unit is configured to determine at least one hand keypoint in the hand image; a second determination unit configured to determine a fingernail image in the hand image based on the at least one hand key point; a third determining unit configured to determine nail related information based on the nail image, wherein the related information includes a nail key point and a nail region.
In some optional implementations of some embodiments, the nail-related information further includes nail verification information, and the apparatus further comprises a discarding unit. The discarding unit is configured to discard the nail key points and the nail region in response to the nail verification information not meeting a preset condition.
In some optional implementations of some embodiments, the second determining unit 502 may be further configured to: determining the nail image and the category of the nail image in the hand image based on the at least one hand key point.
In some optional implementations of some embodiments, the apparatus 500 further comprises: an input unit configured to input the target image into a hand detection model trained in advance, to obtain position information of the hand image; and a cropping unit configured to crop the target image based on the position information to obtain the hand image.
In some optional implementations of some embodiments, the first determining unit 501 may be further configured to: and inputting the hand image into a pre-trained hand key point detection model to generate the at least one hand key point.
In some optional implementations of some embodiments, the second determining unit 502 may be further configured to: selecting a predetermined number of hand key points from the at least one hand key point as finger key points; and cutting the hand image based on the finger key points to generate the nail image.
In some optional implementations of some embodiments, the third determining unit 503 may be further configured to: and inputting the nail image into a pre-trained nail key point detection and segmentation model to generate the fingernail region and the fingernail key point.
It will be understood that the elements described in the apparatus 500 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 500 and the units included therein, and are not described herein again.
Referring now to FIG. 6, a block diagram of an electronic device (e.g., the electronic device of FIG. 1) 600 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 602 or a program loaded from a storage means 608 into a random access memory (RAM) 603. Various programs and data necessary for the operation of the electronic device 600 are also stored in the RAM 603. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect over any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the apparatus; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining at least one hand keypoint in the hand image; determining a nail image in the hand image based on the at least one hand key point; and determining nail related information based on the nail image, wherein the related information comprises nail key points and nail regions.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, and may be described as: a processor including a first determination unit, a second determination unit, and a third determination unit. The names of these units do not in some cases limit the units themselves; for example, the first determination unit may also be described as "a unit that determines at least one hand key point in the hand image". The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
According to one or more embodiments of the present disclosure, there is provided an image processing method including: determining at least one hand keypoint in the hand image; determining a nail image in the hand image based on the at least one hand key point; and determining nail related information based on the nail image, wherein the related information comprises nail key points and nail regions.
According to one or more embodiments of the present disclosure, the above nail related information further includes: nail verification information; and the above method further comprises: and discarding the nail key point and the nail region in response to the nail verification information not meeting a preset condition.
According to one or more embodiments of the present disclosure, the determining a nail image in the hand image based on the at least one hand keypoint comprises: determining the nail image and the category of the nail image in the hand image based on the at least one hand key point.
In accordance with one or more embodiments of the present disclosure, a method further comprises: inputting the target image into a pre-trained hand detection model to obtain position information of the hand image; and cutting the target image based on the position information to obtain the hand image.
In accordance with one or more embodiments of the present disclosure, the determining at least one hand keypoint in the hand image comprises: and inputting the hand image into a pre-trained hand key point detection model to generate the at least one hand key point.
According to one or more embodiments of the present disclosure, the determining a nail image in the hand image based on the at least one hand keypoint comprises: selecting a predetermined number of hand key points from the at least one hand key point as finger key points; and cutting the hand image based on the finger key points to generate the nail image.
According to one or more embodiments of the present disclosure, the determining nail related information based on the nail image includes: and inputting the nail image into a pre-trained nail key point detection and segmentation model to generate the fingernail region and the fingernail key point.
According to one or more embodiments of the present disclosure, there is provided an image processing apparatus including: a first determination unit, a second determination unit and a third determination unit. Wherein the first determination unit is configured to determine at least one hand keypoint in the hand image; a second determination unit configured to determine a fingernail image in the hand image based on the at least one hand key point; a third determining unit configured to determine nail related information based on the nail image, wherein the related information includes a nail key point and a nail region.
According to one or more embodiments of the present disclosure, the nail-related information further includes nail verification information, and the apparatus further comprises a discarding unit. The discarding unit is configured to discard the nail key points and the nail region in response to the nail verification information not meeting a preset condition.
According to one or more embodiments of the present disclosure, the second determining unit may be further configured to: determining the nail image and the category of the nail image in the hand image based on the at least one hand key point.
According to one or more embodiments of the present disclosure, an apparatus further comprises: an input unit configured to input the target image into a hand detection model trained in advance, to obtain position information of the hand image; and a cropping unit configured to crop the target image based on the position information to obtain the hand image.
According to one or more embodiments of the present disclosure, the first determining unit may be further configured to: and inputting the hand image into a pre-trained hand key point detection model to generate the at least one hand key point.
According to one or more embodiments of the present disclosure, the second determining unit may be further configured to: selecting a predetermined number of hand key points from the at least one hand key point as finger key points; and cutting the hand image based on the finger key points to generate the nail image.
According to one or more embodiments of the present disclosure, the third determining unit may be further configured to: and inputting the nail image into a pre-trained nail key point detection and segmentation model to generate the fingernail region and the fingernail key point.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method as described in any of the embodiments above.
According to one or more embodiments of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the method as described in any of the embodiments above.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments in which any combination of the above-mentioned features or their equivalents is made without departing from the inventive concept defined above. For example, technical solutions formed by interchanging the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure are also covered.

Claims (10)

1. A method of image processing, comprising:
determining at least one hand keypoint in the hand image;
determining a nail image in the hand image based on the at least one hand keypoint;
determining nail related information based on the nail image, wherein the related information includes nail key points and nail regions.
2. The method of claim 1, wherein the nail-related information further comprises: nail verification information; and
the method further comprises the following steps:
and in response to the nail verification information not meeting a preset condition, discarding the nail key point and the nail region.
3. The method of claim 1, wherein said determining a nail image in the hand image based on the at least one hand keypoint comprises:
determining the nail image and the category of the nail image in the hand image based on the at least one hand keypoint.
4. The method of claim 1, wherein prior to determining at least one hand keypoint in a hand image, further comprising: inputting the target image into a pre-trained hand detection model to obtain position information of the hand image;
and cutting the target image based on the position information to obtain the hand image.
5. The method of claim 1, wherein said determining at least one hand keypoint in a hand image comprises:
and inputting the hand image into a pre-trained hand key point detection model to generate the at least one hand key point.
6. The method of claim 1, wherein said determining a nail image in the hand image based on the at least one hand keypoint comprises:
selecting a predetermined number of hand keypoints from the at least one hand keypoint as finger keypoints;
and cutting the hand image based on the finger key points to generate the nail image.
7. The method of claim 1, wherein said determining nail-related information based on said nail image comprises:
and inputting the nail image into a pre-trained nail key point detection and segmentation model to generate the fingernail region and the fingernail key point.
8. An apparatus for generating information related to a fingernail, comprising:
a first determination unit configured to determine at least one hand keypoint in a hand image;
a second determination unit configured to determine a fingernail image in the hand image based on the at least one hand key point;
a third determination unit configured to determine nail related information based on the nail image, wherein the related information includes a nail key point and a nail region.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
CN202011058037.1A 2020-09-30 2020-09-30 Image processing method, apparatus, device and medium Pending CN112183388A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011058037.1A CN112183388A (en) 2020-09-30 2020-09-30 Image processing method, apparatus, device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011058037.1A CN112183388A (en) 2020-09-30 2020-09-30 Image processing method, apparatus, device and medium

Publications (1)

Publication Number Publication Date
CN112183388A 2021-01-05

Family

ID=73945528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011058037.1A Pending CN112183388A (en) 2020-09-30 2020-09-30 Image processing method, apparatus, device and medium

Country Status (1)

Country Link
CN (1) CN112183388A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651879A (en) * 2016-12-23 2017-05-10 深圳市拟合科技有限公司 Method and system for extracting nail image
CN108230383A (en) * 2017-03-29 2018-06-29 北京市商汤科技开发有限公司 Hand three-dimensional data determines method, apparatus and electronic equipment
CN108227912A (en) * 2017-11-30 2018-06-29 北京市商汤科技开发有限公司 Apparatus control method and device, electronic equipment, computer storage media
WO2020029466A1 (en) * 2018-08-07 2020-02-13 北京字节跳动网络技术有限公司 Image processing method and apparatus
CN111382745A (en) * 2018-12-30 2020-07-07 深圳市邻友通科技发展有限公司 Nail image segmentation method, device, equipment and storage medium
CN111047526A (en) * 2019-11-22 2020-04-21 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022267402A1 (en) * 2021-06-25 2022-12-29 北京市商汤科技开发有限公司 Image processing method and apparatus, device, and storage medium
CN113486761A (en) * 2021-06-30 2021-10-08 北京市商汤科技开发有限公司 Nail identification method, device, equipment and storage medium
WO2023273227A1 (en) * 2021-06-30 2023-01-05 北京市商汤科技开发有限公司 Fingernail recognition method and apparatus, device, and storage medium

Similar Documents

Publication Publication Date Title
CN110298413B (en) Image feature extraction method and device, storage medium and electronic equipment
CN109829432B (en) Method and apparatus for generating information
CN110532981B (en) Human body key point extraction method and device, readable storage medium and equipment
CN111784712B (en) Image processing method, device, equipment and computer readable medium
CN110516678B (en) Image processing method and device
CN108510084B (en) Method and apparatus for generating information
CN111461967B (en) Picture processing method, device, equipment and computer readable medium
CN112183388A (en) Image processing method, apparatus, device and medium
CN112257582A (en) Foot posture determination method, device, equipment and computer readable medium
CN112200183A (en) Image processing method, device, equipment and computer readable medium
CN111461965B (en) Picture processing method and device, electronic equipment and computer readable medium
CN112907628A (en) Video target tracking method and device, storage medium and electronic equipment
CN113628097A (en) Image special effect configuration method, image recognition method, image special effect configuration device and electronic equipment
CN111756953A (en) Video processing method, device, equipment and computer readable medium
CN111586295B (en) Image generation method and device and electronic equipment
CN113220922B (en) Image searching method and device and electronic equipment
CN111461969B (en) Method, device, electronic equipment and computer readable medium for processing picture
CN112233207A (en) Image processing method, device, equipment and computer readable medium
CN113177176A (en) Feature construction method, content display method and related device
CN112418233A (en) Image processing method, image processing device, readable medium and electronic equipment
CN112070034A (en) Image recognition method and device, electronic equipment and computer readable medium
CN112488204A (en) Training sample generation method, image segmentation method, device, equipment and medium
CN111835917A (en) Method, device and equipment for showing activity range and computer readable medium
CN111770385A (en) Card display method and device, electronic equipment and medium
CN111914861A (en) Target detection method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information