WO2022134703A1 - Hair follicle point identification method, system, device and storage medium - Google Patents

Hair follicle point identification method, system, device and storage medium

Info

Publication number
WO2022134703A1
WO2022134703A1 · PCT/CN2021/120636 · CN2021120636W
Authority
WO
WIPO (PCT)
Prior art keywords
image
hair follicle
hair
extracted
follicle
Prior art date
Application number
PCT/CN2021/120636
Other languages
English (en)
French (fr)
Inventor
黄弯弯
杨溪
吕文尔
Original Assignee
上海微创卜算子医疗科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海微创卜算子医疗科技有限公司
Publication of WO2022134703A1

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/69: Microscopic objects, e.g. biological cells or cellular parts

Definitions

  • the present application relates to the field of feature identification, and more particularly to methods, systems, devices, and storage media for hair follicle point identification.
  • Hair transplantation is one of the commonly used methods for dealing with hair loss. It was proposed by Norman Orentreich in 1959. The main method is to move hair from an area where the hair of the person with hair loss is dense and robust (for example, the back of the head) to areas where the hair is thin, by transferring the hair follicles.
  • the traditional hair transplant operation is generally done manually by the hair transplant operator, which mainly includes manual anesthesia of the scalp of the alopecia by the hair transplant operator, and then manually identifying each hair follicle and manually extracting it.
  • This manual hair transplant technique is highly dependent on the experience and technology of the hair transplant operator, and has problems such as time-consuming, laborious, and low success rate.
  • the present application provides a method, system, device and storage medium for identifying hair follicle points, which can completely automatically complete the identification of hair follicle points, thereby helping to improve the efficiency of hair transplantation and reducing the burden on operators.
  • the present application provides a method for identifying hair follicle points, including: acquiring a first image, the first image including a first image portion corresponding to a hair follicle to-be-extracted area; determining, based on the first image portion, the first coordinate position in the first image coordinate system of each hair follicle to be extracted in the hair follicle to-be-extracted area; and performing path planning based on each of the first coordinate positions to determine an extraction path for extracting each hair follicle to be extracted.
  • the method further includes determining the actual extraction position of each hair follicle to be extracted according to the extraction path.
  • determining the first coordinate position of each hair follicle to be extracted in the hair follicle to-be-extracted region in the first image coordinate system includes: segmenting the first image portion to obtain a plurality of first hair regions; and determining the first coordinate position of each hair follicle to be extracted based on the first hair regions.
  • determining the first coordinate position of each hair follicle to be extracted based on the first hair regions includes: using the minimum enclosing rectangle of each first hair region and the growth direction of the hair to determine the first coordinate position of each hair follicle to be extracted.
  • segmenting the first image portion to obtain a plurality of first hair regions includes: performing a binarization process on the first image portion to obtain the plurality of first hair regions.
  • the method further includes: performing connected region analysis and morphological processing on the plurality of first hair regions.
  • determining the actual extraction position of each hair follicle to be extracted according to the extraction path includes: based on the first coordinate position, obtaining an estimated extraction position, in the image acquisition device coordinate system, of the first hair follicle to be extracted in the extraction path; instructing the robotic arm to move to the estimated extraction position and to acquire a second image, wherein the second image includes a second image portion corresponding to the hair follicle to-be-extracted area; and determining the actual extraction position of the first hair follicle to be extracted based on the second image portion.
  • determining the actual extraction position of the first hair follicle to be extracted includes: segmenting the second image portion to obtain a plurality of second hair regions; determining, based on the second hair regions, the second coordinate position of each hair follicle to be extracted in the second image coordinate system; calculating, based on the position and orientation of the hair-taking needle in the image acquisition device coordinate system, the needle-lowering point position of the hair-taking needle in the second image; determining, among all the second coordinate positions, the position closest to the needle-lowering point position; and converting the closest position between coordinate systems to obtain the actual extraction position of the hair follicle currently to be extracted.
  • the present application provides a device for identifying hair follicle points, comprising: a memory storing a machine-executable program; and a processor which, when executing the machine-executable program, implements the hair follicle point identification method of any one of claims 1-8.
  • the present application provides a hair follicle point identification system, comprising: a robotic arm, on which a hair-taking needle is mounted; an image acquisition device, wherein the image acquisition device is installed to move synchronously with the robotic arm; and a control device, wherein the control device is communicatively connected with the robotic arm and the image acquisition device and is configured to exchange information with the robotic arm and the image acquisition device so as to realize the hair follicle point identification method according to the first aspect of the present application.
  • the present application provides a computer-readable storage medium storing a computer program, which, when executed by a processor, implements the method for identifying hair follicle points according to the first aspect of the present application.
  • FIG. 1 shows a flowchart of a method for identifying hair follicle points according to an embodiment of the present application
  • FIG. 2 shows a flowchart of an example implementation for determining, based on the first image portion, the first coordinate position of each hair follicle to be extracted in the to-be-extracted hair follicle region in the first image coordinate system according to an embodiment of the present application ;
  • FIG. 3 shows a flowchart of an implementation for determining the actual extraction position of each hair follicle to be extracted according to an extraction path according to an embodiment of the present application
  • FIG. 4 shows a flowchart of an implementation for determining the actual extraction position of the hair follicle to be extracted currently based on the second image portion according to an embodiment of the present application
  • Fig. 5 shows the example schematic diagram of the first image obtained in the hair follicle point identification method shown in Fig. 1;
  • FIG. 6 shows an example schematic diagram of the first image portion
  • FIG. 7 shows an example schematic diagram of a binarized image obtained after the first image part is subjected to binarization processing
  • Fig. 10A shows a schematic diagram of the second image part after the actual extraction position of each hair follicle point to be extracted is determined
  • Fig. 10B shows an enlarged schematic view of the part indicated by A in Fig. 10A;
  • FIG. 11 shows a schematic structural block diagram of a device for identifying hair follicle points according to an embodiment of the present application
  • FIG. 12 shows a schematic structural block diagram of a hair follicle point identification system according to an embodiment of the present application.
  • the present application provides a method for identifying hair follicle points.
  • the hair follicle point identification method may include the following steps:
  • Step 101: acquiring a first image, where the first image includes a first image portion corresponding to the hair follicle to-be-extracted area.
  • a suitable image acquisition device can be selected first, and parameters of the selected image acquisition device can be set.
  • the parameters that can be set include, for example, exposure time, resolution, and the like.
  • a depth camera can be selected as the image acquisition device, with the exposure time set to 7190 μs and the resolution set to 2560*1600.
  • the first image can be acquired by using the selected image acquisition device to perform local visual imaging of the body part containing the hair follicle to-be-extracted area (for example, the surface of the head of the person with hair loss from which hair follicles are to be extracted). The first image therefore includes a first image portion corresponding to the hair follicle to-be-extracted area (for example, the image portion inside the dotted box in FIG. 5 and the image portion shown in FIG. 6 ).
  • the hair follicle to-be-extracted area refers to, for example, the area on the head of the person with hair loss from which hair follicles are extracted, such as the back of the head, where the hair is relatively tough and thick.
  • the first image includes the first image portion corresponding to the hair follicle to-be-extracted area so that the position of each hair follicle to be extracted in that area can be determined.
  • the hair-taking needle for extracting the hair follicle points to be extracted can be mounted on the robotic arm, and the image acquisition device is installed to move synchronously with the robotic arm. Therefore, no matter how the robotic arm moves, the positional relationship between the hair-taking needle and the robotic arm is fixed, and the positional relationship between the robotic arm and the image acquisition device is also fixed, so the positional relationship between the hair-taking needle and the image acquisition device is fixed as well.
  • thus, the three-dimensional position and orientation of the hair-taking needle in the coordinate system of the image acquisition device can be calculated.
  • the orientation of the hair-taking needle may refer to the specific direction in which its needle tip points, for example, forward, backward, left, or right.
  • the conversion relationship between the coordinate system of the image acquisition device and the image coordinate system of each image captured by the device is fixed. Based on this conversion relationship, a coordinate position in the image coordinate system can be converted into a three-dimensional position in the image acquisition device coordinate system, and a three-dimensional position in the image acquisition device coordinate system can likewise be converted into a coordinate position in the image coordinate system.
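  • As a concrete illustration of this fixed conversion, the following sketch uses the standard pinhole-camera model with a per-pixel depth value (a depth camera is mentioned above). The intrinsic parameters (focal lengths and principal point) are illustrative assumptions, not values from the application.

```python
# Sketch of the fixed conversion between image (pixel) coordinates and the
# image acquisition device (camera) coordinate system, using the standard
# pinhole model.  FX, FY, CX, CY are assumed values for illustration.

FX, FY = 1400.0, 1400.0   # focal lengths in pixels (assumed)
CX, CY = 1280.0, 800.0    # principal point for a 2560*1600 image (assumed)

def pixel_to_camera(u, v, depth):
    """Image coordinate (u, v) plus depth -> 3-D point in the camera frame."""
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return (x, y, depth)

def camera_to_pixel(x, y, z):
    """3-D point in the camera frame -> image coordinate (u, v)."""
    return (FX * x / z + CX, FY * y / z + CY)
```

  • Because the two mappings are inverses of each other, converting a pixel into the camera frame and back recovers the original pixel coordinate.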
  • Step 102: based on the first image portion, a first coordinate position in the first image coordinate system is determined for each hair follicle to be extracted in the hair follicle to-be-extracted region.
  • a first image portion corresponding to the to-be-extracted area of the hair follicle needs to be extracted from the first image.
  • a coordinate system (referred to in this document as the first image coordinate system) may be established for the collected first image, so as to represent the location of each pixel of the first image within the first image.
  • the first image coordinate system may be established, for example, by taking the downward direction along the height of the first image from the origin as the positive Y direction.
  • the position of the first image portion in the first image can be identified, for example, based on the density and distribution of the hair-colored pixels contained in the first image, so that the first image portion can be extracted from the first image.
  • the first coordinate position of each hair follicle to be extracted in the first image coordinate system can be determined.
  • FIG. 2 shows a flowchart of an implementation of step 102, i.e., determining, based on the first image portion, the first coordinate position in the first image coordinate system of each hair follicle to be extracted in the hair follicle to-be-extracted region, according to an embodiment of the present application.
  • In step 201, the first image portion is segmented to obtain a plurality of first hair regions.
  • the plurality of first hair regions may be obtained by binarizing the first image portion.
  • a global binarization method or a local binarization method may be used to perform binarization processing on the first image portion.
  • the first image portion may, for example, first be gray-scaled (converted into a grayscale image); pixels with a gray level greater than a preset threshold are then set to black and pixels with a gray level less than the threshold are set to white, and the resulting image is inverted to obtain the corresponding binarized image.
  • alternatively, the first image portion can be converted into a grayscale image, and a threshold can then be calculated for each pixel (for example, by averaging) from the neighborhood of that pixel (for example, a neighborhood size of 31*31) and an offset value (for example, 25); each pixel is set to white or black based on its threshold, and the resulting image is inverted to obtain the corresponding binarized image.
  • FIG. 7 shows an example of a binarized image obtained after the first image portion is subjected to the binarization process as described above.
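  • The local binarization just described can be sketched as follows; the tiny 5*5 grid and 3*3 neighborhood are toy values for illustration (the description above mentions a 31*31 neighborhood and an offset of 25 for real images).

```python
# Minimal sketch of local (adaptive) binarization: for every pixel a
# threshold is computed as the mean of its neighborhood minus an offset,
# and dark pixels (hair) end up as foreground (1) after the inversion step.

def local_binarize(gray, nbhd=3, offset=25):
    h, w = len(gray), len(gray[0])
    r = nbhd // 2
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Mean over the (border-clipped) neighborhood around (i, j).
            vals = [gray[a][b]
                    for a in range(max(0, i - r), min(h, i + r + 1))
                    for b in range(max(0, j - r), min(w, j + r + 1))]
            thresh = sum(vals) / len(vals) - offset
            # A dark pixel below its local threshold is hair foreground.
            out[i][j] = 1 if gray[i][j] < thresh else 0
    return out

# A light scalp (gray 200) with one dark vertical hair stroke (gray 40).
gray = [[40 if j == 2 else 200 for j in range(5)] for i in range(5)]
mask = local_binarize(gray)
```

  • On the toy image, the dark column is marked as hair and the bright background is suppressed, which is exactly the white-hair-on-black result shown in FIG. 7.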
  • then, the segmented first hair regions are processed by performing connected region analysis and morphological processing.
  • the area of each first hair region can be calculated first and compared with a preset area threshold. If the area of a first hair region is smaller than the threshold, that region is regarded as noise rather than hair and is deleted (for example, converted to the color representing non-hair regions, i.e., black in FIG. 7 ).
  • a dilation operation may be performed on the segmented first hair regions, followed by an erosion operation, to remove noise points in each first hair region.
  • the purpose of the dilation operation is to "enlarge" the extent of each first hair region (that is, to expand the boundary of the region outward) so that the small particle noise points contained in the region are incorporated into it.
  • since the first hair regions are white and these noise points appear as small black particles within the white hair regions, the dilation operation in effect turns the noise points in the first hair regions white.
  • the erosion operation makes the extent of each first hair region "smaller" (that is, it shrinks the boundary of the region inward); its purpose is to restore the extent of the first hair region enlarged by the dilation operation to its original size.
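  • The area filtering and the dilation-then-erosion sequence (a morphological closing) described above can be sketched in pure Python on nested lists; a real implementation would operate on image arrays with an image-processing library.

```python
# Connected-region analysis: delete components whose area is below a
# threshold (noise, not hair), then close small black specks inside the
# white hair regions with a dilation followed by an erosion.

def remove_small_regions(mask, min_area):
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [row[:] for row in mask]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # Flood-fill one 4-connected component.
                stack, comp = [(i, j)], []
                seen[i][j] = True
                while stack:
                    a, b = stack.pop()
                    comp.append((a, b))
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < h and 0 <= nb < w and mask[na][nb] and not seen[na][nb]:
                            seen[na][nb] = True
                            stack.append((na, nb))
                if len(comp) < min_area:      # too small to be a hair
                    for a, b in comp:
                        out[a][b] = 0
    return out

def dilate(mask):
    h, w = len(mask), len(mask[0])
    return [[1 if any(mask[a][b]
                      for a in range(max(0, i - 1), min(h, i + 2))
                      for b in range(max(0, j - 1), min(w, j + 2)))
             else 0 for j in range(w)] for i in range(h)]

def erode(mask):
    h, w = len(mask), len(mask[0])
    return [[1 if all(mask[a][b]
                      for a in range(max(0, i - 1), min(h, i + 2))
                      for b in range(max(0, j - 1), min(w, j + 2)))
             else 0 for j in range(w)] for i in range(h)]

def close_regions(mask):
    # Dilation absorbs black noise specks; erosion restores the outline.
    return erode(dilate(mask))
```

  • A single black pixel inside a white region is filled by the closing, while an isolated white speck smaller than the area threshold is removed by the component filter.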
  • Step 202: a first coordinate position of each hair follicle to be extracted is determined based on the first hair regions.
  • the minimum enclosing rectangle of each first hair region and the growth direction of the hair can be used to determine the first coordinate position of each hair follicle to be extracted (that is, the coordinate position of each hair follicle to be extracted in the first image coordinate system). For example, a minimum enclosing rectangle can be drawn for each first hair region, the midpoints of the two shortest sides of each rectangle can be extracted, and one of these midpoints can then be selected, according to the growth direction of the hair, as the first coordinate position of the corresponding hair follicle point.
  • FIG. 8 shows an example of the first coordinate position of each hair follicle to be extracted determined in the above manner, wherein the darker dot at the root of each hair indicates the first coordinate position of that hair follicle to be extracted in the first image portion, such as the hair follicle to be extracted indicated by the marker F.
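  • The midpoint selection can be made concrete as follows; the corner ordering and the default downward growth direction are assumptions for illustration, not details fixed by the application.

```python
# From the four corners of a hair region's minimum enclosing rectangle,
# take the midpoints of the two shortest sides; the follicle point is the
# midpoint at the root end, chosen so that the hair extends from it along
# the assumed growth direction.

def follicle_point(corners, growth_dir=(0.0, 1.0)):
    """corners: four (x, y) points of the rectangle, in order around it."""
    sides = []
    for k in range(4):
        (x0, y0), (x1, y1) = corners[k], corners[(k + 1) % 4]
        length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        sides.append((length, ((x0 + x1) / 2.0, (y0 + y1) / 2.0)))
    # Midpoints of the two shortest (opposite) sides.
    sides.sort(key=lambda s: s[0])
    (mx0, my0), (mx1, my1) = sides[0][1], sides[1][1]
    # The root midpoint is the one the hair grows away from: the vector
    # from it to the other midpoint points along the growth direction.
    gx, gy = growth_dir
    if (mx1 - mx0) * gx + (my1 - my0) * gy > 0:
        return (mx0, my0)
    return (mx1, my1)
```

  • For a thin vertical rectangle with hair growing downward, the root (follicle point) is the midpoint of the short top side; reversing the growth direction selects the bottom midpoint instead.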
  • Step 103: path planning is performed based on each first coordinate position to determine an extraction path for extracting each hair follicle to be extracted.
  • such path planning may be implemented using algorithms such as a shortest path algorithm or the serpentine (snake) sorting algorithm.
  • taking the serpentine sorting algorithm as an example: during path planning, the first image portion can be divided into multiple equal parts (for example, five) along the Y axis of the image coordinate system. The y-coordinate values of the hair follicle points are sorted from small to large to obtain the minimum and maximum values, and the interval between them is then divided equally. Within each part, the hair follicle points are then sorted by x-coordinate value, alternating between small-to-large and large-to-small in successive parts, so as to realize the path planning.
  • FIG. 9 shows an example schematic diagram of path planning for each first coordinate position using the serpentine sorting algorithm.
  • the four horizontal lines indicate that the first image portion is divided into five equal parts from top to bottom when the serpentine sorting algorithm is used for path planning. In alternating parts, the hair follicle points are sorted by x-coordinate value from small to large, and in the remaining parts from large to small.
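  • A minimal sketch of this serpentine ordering, assuming follicle points are given as (x, y) pairs in the image coordinate system:

```python
# Serpentine (snake) sorting: split the points into equal horizontal bands
# between the minimum and maximum y-coordinate, then order the points of
# successive bands by x alternately ascending and descending, yielding a
# back-and-forth extraction path.

def serpentine_path(points, bands=5):
    ys = [p[1] for p in points]
    y_min, y_max = min(ys), max(ys)
    height = (y_max - y_min) / bands or 1.0   # guard against a flat set
    buckets = [[] for _ in range(bands)]
    for p in points:
        idx = min(int((p[1] - y_min) / height), bands - 1)
        buckets[idx].append(p)
    path = []
    for i, bucket in enumerate(buckets):
        # Even bands left-to-right, odd bands right-to-left.
        path.extend(sorted(bucket, key=lambda p: p[0], reverse=(i % 2 == 1)))
    return path
```

  • With two bands, four corner points are visited left-to-right along the top and right-to-left along the bottom, which is the zig-zag pattern of FIG. 9.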
  • Step 104: according to the extraction path determined in step 103, the actual extraction position of each hair follicle to be extracted is determined.
  • first, the actual extraction position of the first hair follicle to be extracted is determined according to the extraction path; after that, the actual extraction position of the second hair follicle to be extracted is determined, and so on, until the actual extraction positions of all the hair follicles to be extracted have been determined.
  • determining the actual extraction position of each hair follicle to be extracted may specifically include:
  • Step 301: based on the first coordinate position determined in step 102, the estimated extraction position, in the coordinate system of the image acquisition device, of the first hair follicle to be extracted in the extraction path is obtained.
  • the estimated extraction positions of all the hair follicles to be extracted in the coordinate system of the image acquisition device can be obtained by converting the first coordinate position determined in step 102 from the first image coordinate system to the coordinate system of the image acquisition device, and then The estimated extraction location of the first hair follicle to be extracted in the extraction path in the coordinate system of the image acquisition device is selected from these estimated extraction locations.
  • the obtained estimated extraction position can then be sent to the robotic arm equipped with the hair-taking needle, so that the robotic arm can move to the estimated extraction position of the first hair follicle to be extracted.
  • Step 302: the robotic arm is instructed to move to the estimated extraction position of the hair follicle currently to be extracted, and a second image is collected, wherein the second image includes a second image portion corresponding to the hair follicle to-be-extracted region.
  • a hair-taking needle is mounted on the robotic arm, and the image acquisition device is mounted to move with the robotic arm. It is worth noting that, since the image acquisition device is installed to move with the robotic arm, the positional relationship between the image acquisition device and the robotic arm is fixed, so that the positional relationship between the image acquisition device and the hair-taking needle mounted on the robotic arm is also fixed.
  • after the robotic arm moves, the position at which the image acquisition device captures images also changes relative to the position at which it captured the first image. It is worth noting that, in this document, if the robotic arm has currently moved to the estimated extraction position of the first hair follicle to be extracted in the extraction path, the hair follicle currently to be extracted refers to the first hair follicle to be extracted; if the robotic arm has currently moved to the estimated extraction position of the second hair follicle to be extracted in the extraction path, the hair follicle currently to be extracted refers to the second hair follicle to be extracted, and so on.
  • the acquisition of the second image can be implemented in a manner similar to that described for step 101, so for the sake of brevity, it will not be repeated here.
  • Step 303: based on the second image portion, the actual extraction position of the hair follicle currently to be extracted is determined.
  • since the image acquisition device acquires the second image after the robotic arm has moved the hair-taking needle to the estimated extraction position of the hair follicle currently to be extracted, the second image is not the same as the first image: it is collected at a position closer to the hair follicle currently to be extracted than the first image. It can therefore be understood that the position determined based on the second image portion included in the second image should be more precise than the position determined based on the first image portion included in the first image. Step 303 will be described in further detail below with reference to FIG. 4 .
  • after the actual extraction position of the hair follicle currently to be extracted is determined, the actual extraction position can be sent to the robotic arm, so that the robotic arm moves to the actual extraction position and the hair-taking needle can accurately extract the hair follicle currently to be extracted.
  • Step 304: based on the extraction path and the actual extraction position of the hair follicle point just extracted, the estimated extraction position of the next hair follicle to be extracted is determined; the next hair follicle to be extracted is then taken as the hair follicle currently to be extracted, and steps 302-303 are repeated until the actual extraction position of each hair follicle to be extracted has been determined.
  • for example, after the actual extraction position of the first hair follicle to be extracted has been determined and the robotic arm has moved to that position, the actual extraction position of the first hair follicle and the extraction path may be used to determine the estimated extraction position of the second hair follicle to be extracted. FIG. 10A shows a schematic diagram of the second image portion after the actual extraction position of each hair follicle point to be extracted has been determined.
  • FIG. 10B shows an enlarged schematic diagram of the partial part indicated by A in FIG. 10A .
  • in FIG. 10B, point 1001 is the estimated extraction position of the hair follicle in the partial part A, and point 1002 is the actual extraction position obtained based on the second image portion; evidently, the actual extraction position reflects the actual hair follicle position more accurately than the estimated extraction position.
  • determining the actual extraction position of the hair follicle to be extracted currently may include:
  • Step 401: the second image portion is segmented to obtain a plurality of second hair regions.
  • This step can be implemented in a manner similar to step 201, so for the sake of brevity, it will not be repeated here.
  • Step 402: a second coordinate position in the second image coordinate system is determined for each hair follicle to be extracted, based on the second hair regions.
  • This step can be implemented in a manner similar to step 202, so for the sake of brevity, it will not be repeated here.
  • Step 403: based on the position and orientation of the hair-taking needle in the coordinate system of the image acquisition device, the position of the needle-lowering point of the hair-taking needle in the second image is calculated.
  • Step 404: the position closest to the needle-lowering point position among all the second coordinate positions is determined; for example, the Euclidean distance (also called the Euclidean metric) can be used to measure the closeness.
  • the actual extraction position of the hair follicle currently to be extracted is then obtained by performing a coordinate system transformation on the closest position. Specifically, in this step, the actual extraction position of the hair follicle currently to be extracted can be obtained by converting the closest position from a position in the second image coordinate system into the corresponding position in the image acquisition device coordinate system. After the actual extraction position of the hair follicle point currently to be extracted is obtained, it can be sent to the robotic arm on which the hair-taking needle is mounted, so that the robotic arm moves the hair-taking needle to the actual extraction position, thereby facilitating the extraction of the hair follicle point currently to be extracted by the hair-taking needle.
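  • The nearest-position selection of step 404 can be sketched as follows, with positions given as (x, y) pairs in the second image coordinate system:

```python
import math

# Among all second coordinate positions, pick the one with the smallest
# Euclidean distance to the needle-lowering point.

def closest_follicle(needle_point, positions):
    """Return the candidate position nearest to the needle-lowering point."""
    return min(positions, key=lambda p: math.dist(needle_point, p))
```

  • The selected position would then be converted into the image acquisition device coordinate system, as described above, to obtain the actual extraction position.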
  • the present application can completely automatically complete the identification and extraction of hair follicle points, thereby improving the efficiency of hair transplantation and reducing the requirements for the operator's skills and experience.
  • the present application further provides a hair follicle point identification device.
  • the hair follicle point identification device includes a memory 1101 and a processor 1102 , and a machine executable program is stored in the memory 1101 .
  • when the processor 1102 executes the machine-executable program, it implements the hair follicle point identification method described in the above embodiments.
  • the number of the memory 1101 and the processor 1102 may be one or more.
  • the hair follicle point identification device may be implemented using electronic devices intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices.
  • the hair follicle point identification device may further include a communication interface 1103, which is used to communicate (wired or wireless) and exchange data with external devices (e.g., the robotic arm 1202 and the image acquisition device 1204 shown in FIG. 12 ).
  • the memory 1101 may include nonvolatile memory and volatile memory.
  • Non-volatile memory may include, for example, read-only memory (ROM), magnetic tape, floppy disk, flash memory, or optical memory, and the like.
  • Volatile memory may include, for example, random access memory (RAM) or external cache memory.
  • the RAM may be in various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
  • the memory 1101, the processor 1102, and the communication interface 1103 can be connected to each other through a bus and implement mutual communication.
  • the bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of presentation, only one thick line is used in FIG. 11, but it does not mean that there is only one bus or one type of bus.
  • the present application further provides a hair follicle point identification system, such as the hair follicle point identification system 1200 shown in FIG. 12 .
  • the system 1200 may include a control device 1201 , a robotic arm 1202 and an image acquisition device 1204 .
  • the hair-taking needle 1203 is mounted on the robotic arm 1202, so the robotic arm 1202 can drive the hair-taking needle to move.
  • the image acquisition device 1204 is installed to move synchronously with the robotic arm 1202, so the positional relationship between the image acquisition device 1204 and the robotic arm is fixed, and the positional relationship between the image acquisition device and the hair-taking needle is also fixed.
  • the control device 1201 can be used to implement the hair follicle point identification device shown in FIG. 11; it is communicatively connected (wired or wireless) with the robotic arm 1202 and the image acquisition device 1204, and is configured to exchange information with the robotic arm 1202 and the image acquisition device 1204 so as to realize the hair follicle point identification method described in the above embodiments.
  • the control device 1201 may be implemented by electronic equipment intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular phones, smart phones, wearable devices, and other similar computing devices.


Abstract

A hair follicle point identification method, system, device, and storage medium. The hair follicle point identification method includes: acquiring a first image, the first image including a first image portion corresponding to a region from which hair follicles are to be extracted (101); determining, on the basis of the first image portion, a first coordinate position, in a first image coordinate system, of each to-be-extracted hair follicle in the region (102); and performing path planning on the basis of each first coordinate position so as to determine an extraction path for extracting each to-be-extracted hair follicle (103). With this method, hair follicle points can be identified fully automatically, which helps improve the efficiency of hair transplantation and lowers the skill and experience required of the operator.

Description

Hair follicle point identification method, system, device, and storage medium
This application claims priority to Chinese patent application No. 2020115631187, filed with the Chinese Patent Office on December 25, 2020 and entitled "Hair Follicle Point Identification Method, System, Device, and Storage Medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of feature identification, and more particularly to hair follicle point identification methods, systems, devices, and storage media.
Background
With the accelerating pace of modern life and increasing stress, hair loss affects more and more people, and at ever younger ages. According to World Health Organization statistics, roughly one in five people today suffers from hair loss; in China alone, the affected population has reached 250 million.
Hair transplantation, proposed by Norman Orentreich in 1959, is one of the most commonly used treatments for hair loss. Its main idea is to move hair from an area where the patient's hair is dense and robust (for example, the back of the head) to an area where hair is sparse, by transferring hair follicles. At present, a traditional transplantation procedure is generally performed manually: the operator anesthetizes the patient's scalp by hand, then visually identifies each follicle and extracts it manually. Such manual transplantation depends heavily on the operator's experience and skill, and is time-consuming, laborious, and prone to failure.
It is therefore necessary to provide a technique that can identify hair follicle points automatically, so as to help improve the efficiency of follicle transplantation.
Summary
In view of the above technical problems, the present application provides a hair follicle point identification method, system, device, and storage medium that can identify hair follicle points fully automatically, thereby helping to improve the efficiency of hair transplantation and lowering the skill and experience required of the operator.
In one aspect, the present application provides a hair follicle point identification method, including: acquiring a first image, the first image including a first image portion corresponding to a region from which hair follicles are to be extracted; determining, on the basis of the first image portion, a first coordinate position, in a first image coordinate system, of each to-be-extracted hair follicle in the region; and performing path planning on the basis of each first coordinate position so as to determine an extraction path for extracting each to-be-extracted hair follicle.
In one implementation, the method further includes determining the actual extraction position of each to-be-extracted hair follicle along the extraction path.
In one implementation, determining, on the basis of the first image portion, the first coordinate position of each to-be-extracted hair follicle in the first image coordinate system includes: segmenting the first image portion to obtain a plurality of first hair regions; and determining the first coordinate position of each to-be-extracted hair follicle on the basis of the first hair regions.
In one implementation, determining the first coordinate position of each to-be-extracted hair follicle on the basis of the first hair regions includes: using the minimum bounding rectangle of each first hair region together with the hair growth direction to determine the first coordinate position of each to-be-extracted hair follicle.
In one implementation, segmenting the first image portion to obtain a plurality of first hair regions includes: binarizing the first image portion to obtain the plurality of first hair regions.
In one implementation, after the plurality of first hair regions are obtained, the method further includes: performing connected-region analysis and morphological processing on the plurality of first hair regions.
In one implementation, determining the actual extraction position of each to-be-extracted hair follicle along the extraction path includes: obtaining, on the basis of the first coordinate position, an estimated extraction position, in an image acquisition device coordinate system, of the first to-be-extracted hair follicle on the extraction path; instructing a robotic arm to move to the estimated extraction position and acquiring a second image, the second image including a second image portion corresponding to the region from which hair follicles are to be extracted; and determining the actual extraction position of the first to-be-extracted hair follicle on the basis of the second image portion.
In one implementation, determining the actual extraction position of the first to-be-extracted hair follicle on the basis of the second image portion includes: segmenting the second image portion to obtain a plurality of second hair regions; determining, on the basis of the second hair regions, a second coordinate position of each to-be-extracted hair follicle in a second image coordinate system; calculating a needle-insertion point of the hair-extraction needle in the second image on the basis of the needle's position and orientation in the image acquisition device coordinate system; determining, among all the second coordinate positions, the position closest to the needle-insertion point; and obtaining the actual extraction position of the current to-be-extracted hair follicle by performing a coordinate system conversion on the closest position.
In another aspect, the present application provides a hair follicle point identification device, including: a memory storing a machine-executable program; and a processor that, when executing the machine-executable program, implements the hair follicle point identification method according to any one of claims 1-8.
In yet another aspect, the present application provides a hair follicle point identification system, including: a robotic arm on which a hair-extraction needle is mounted; an image acquisition device mounted so as to move synchronously with the robotic arm; and a control device communicatively connected with the robotic arm and the image acquisition device and configured to exchange information with the robotic arm and the image acquisition device so as to implement the hair follicle point identification method according to the first aspect of the present application.
In yet another aspect, the present application provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the hair follicle point identification method according to the first aspect of the present application.
Brief Description of the Drawings
FIG. 1 shows a flowchart of a hair follicle point identification method according to an embodiment of the present application;
FIG. 2 shows a flowchart of an example implementation of determining, on the basis of the first image portion, the first coordinate position, in the first image coordinate system, of each to-be-extracted hair follicle in the region from which hair follicles are to be extracted, according to an embodiment of the present application;
FIG. 3 shows a flowchart of one implementation of determining the actual extraction position of each to-be-extracted hair follicle along the extraction path, according to an embodiment of the present application;
FIG. 4 shows a flowchart of one implementation of determining the actual extraction position of the current to-be-extracted hair follicle on the basis of the second image portion, according to an embodiment of the present application;
FIG. 5 shows an example of the first image acquired in the hair follicle point identification method shown in FIG. 1;
FIG. 6 shows an example of the first image portion;
FIG. 7 shows an example of the binarized image obtained after binarizing the first image portion;
FIG. 8 shows an example of the determined first coordinate position of each to-be-extracted hair follicle;
FIG. 9 shows an example of path planning over the first coordinate positions using a snake-order sorting algorithm;
FIG. 10A shows the second image portion after the actual extraction position of each to-be-extracted hair follicle point has been determined;
FIG. 10B shows an enlarged view of the local portion indicated by A in FIG. 10A;
FIG. 11 shows a schematic structural block diagram of a hair follicle point identification device according to an embodiment of the present application;
FIG. 12 shows a schematic structural block diagram of a hair follicle point identification system according to an embodiment of the present application.
Detailed Description
To make the above objects, features, and advantages of the present invention clearer and easier to understand, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings. Many specific details are set forth in the following description to facilitate a full understanding of the invention. The invention can, however, be implemented in many ways other than those described here, and those skilled in the art can make similar improvements without departing from its spirit; the invention is therefore not limited to the specific embodiments disclosed below.
In this specification, the terms "first" and "second" are used only to distinguish different technical features and should not be understood as indicating or implying the relative importance or order of the features so designated. In the description of the present invention, "a plurality of" means two or more, unless explicitly and specifically defined otherwise.
In one embodiment, the present application provides a hair follicle point identification method. As shown in FIG. 1, the method may include the following steps:
Step 101: acquire a first image, the first image including a first image portion corresponding to a region from which hair follicles are to be extracted.
In this embodiment, scalp and hair contrast strongly in color in the first image acquired by the image acquisition device. Therefore, before the first image is acquired at step 101, a suitable image acquisition device may be selected and its parameters configured; the configurable parameters include, for example, exposure time and resolution. As a non-limiting example, a depth camera may be selected as the image acquisition device, with the exposure time set to 7190 and the resolution set to 2560*1600.
In this embodiment, the first image may be acquired by using the selected image acquisition device to take a local visual image of the body part containing the region from which follicles are to be extracted (for example, the donor surface on the patient's head); the first image therefore includes a first image portion corresponding to that region (for example, the portion inside the dashed box in FIG. 5, and the portion shown in FIG. 6).
In this embodiment, the region from which hair follicles are to be extracted refers to an area of the donor surface from which follicles can be harvested, for example the back of the patient's head, where the hair is relatively robust and dense. The first image includes the first image portion corresponding to this region so that the position of each to-be-extracted hair follicle in the region can be determined.
In another embodiment, the hair-extraction needle used to extract the to-be-extracted follicle points may be mounted on a robotic arm, and the image acquisition device is mounted to move synchronously with the robotic arm. Thus, however the robotic arm moves, the positional relationship between the needle and the arm is fixed, the positional relationship between the arm and the image acquisition device is fixed, and consequently the positional relationship between the needle and the image acquisition device is also fixed. On this basis, the three-dimensional position and orientation of the needle in the image acquisition device coordinate system can be calculated from the positional relationship between the needle and the image acquisition device. In this document, the orientation of the needle refers to the specific direction in which the needle tip points, for example forward, backward, left, or right.
In addition, since the images acquired by the image acquisition device are captured within its own field of view, the transformation between the image acquisition device coordinate system and the image coordinate system of any image it acquires is fixed, regardless of how the device moves. Based on this transformation, a coordinate position in an image coordinate system can be converted into a three-dimensional position in the image acquisition device coordinate system, and a three-dimensional position in the image acquisition device coordinate system can likewise be converted into a coordinate position in the image coordinate system.
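As a rough illustration of such a fixed conversion, a pinhole camera model relates pixel coordinates plus depth to 3-D positions in the device coordinate system. This is only a sketch: the application does not specify a camera model, and the intrinsic parameters below are hypothetical values, not taken from the text.

```python
# Sketch of a fixed image-coordinate <-> camera-coordinate conversion under an
# assumed pinhole model. FX/FY/CX/CY are illustrative placeholders only.
FX, FY = 1400.0, 1400.0   # focal lengths in pixels (assumed)
CX, CY = 1280.0, 800.0    # principal point for a 2560*1600 image (assumed)

def pixel_to_camera(u, v, z):
    """Back-project a pixel (u, v) with depth z (e.g. from a depth camera)
    to a 3-D point (x, y, z) in the image acquisition device coordinate system."""
    return ((u - CX) * z / FX, (v - CY) * z / FY, z)

def camera_to_pixel(x, y, z):
    """Project a 3-D point in the device coordinate system back to pixel coordinates."""
    return (FX * x / z + CX, FY * y / z + CY)
```

Because the transformation is fixed, the two functions are exact inverses for any pixel with known depth.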
At step 102, determine, on the basis of the first image portion, the first coordinate position, in the first image coordinate system, of each to-be-extracted hair follicle in the region from which hair follicles are to be extracted.
In one embodiment, to determine the first coordinate position of each to-be-extracted hair follicle in the first image coordinate system, the first image portion corresponding to the region must first be extracted from the first image. Specifically, a coordinate system (referred to herein as the first image coordinate system) may first be established for the acquired first image so that the position of each pixel within the first image can be expressed. As a non-limiting example, the first image coordinate system may be established by taking the top-left pixel of the first image as the origin (0, 0), the direction rightward along the width as the positive X direction, and the direction downward along the height as the positive Y direction. After the first image coordinate system has been established, the position of the first image portion within the first image can be identified, for example on the basis of the density and distribution of hair-colored pixels contained in the first image, so that the first image portion can be extracted from the first image. The first coordinate position of each to-be-extracted hair follicle can then be determined on the basis of the first image portion.
FIG. 2 shows a flowchart of an example implementation, at step 102, of determining, on the basis of the first image portion, the first coordinate position, in the first image coordinate system, of each to-be-extracted hair follicle in the region, according to an embodiment of the present application.
At step 201, segment the first image portion to obtain a plurality of first hair regions.
In one implementation, the plurality of first hair regions may be obtained by binarizing the first image portion, using either a global or a local binarization method. Merely as an example, in the global method, the first image portion may first be converted to a grayscale image; pixels whose gray level is greater than a preset threshold are then set to black and pixels whose gray level is less than the threshold are set to white, and the resulting image is inverted to obtain the binarized image. Likewise merely as an example, in the local method, the first image portion is also first converted to grayscale; a threshold is then computed for each pixel from its neighborhood (for example, a 31*31 neighborhood) and an offset (for example, 25), e.g. by averaging, each pixel is set to white or black according to its threshold, and the resulting image is inverted to obtain the binarized image. For example, FIG. 7 shows an example of the binarized image obtained after binarizing the first image portion as described above.
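The two binarization variants described above can be sketched in plain Python on a grayscale image stored as a list of rows. This is a minimal reading, not the application's code: it folds the threshold-then-invert steps into a single comparison so that dark hair pixels come out white (255) on a black (0) background, and uses the example window size and offset from the text.

```python
def binarize_global(gray, thresh):
    """Global binarization: pixels at or below `thresh` (dark hair) become
    white (255); brighter scalp pixels become black (0)."""
    return [[255 if g <= thresh else 0 for g in row] for row in gray]

def binarize_local(gray, radius=15, offset=25):
    """Local binarization: each pixel is compared against the mean of its
    (2*radius+1)^2 neighborhood minus `offset` (radius 15 gives the 31*31
    window from the text), which adapts to uneven illumination."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - radius), min(h, y + radius + 1))
            xs = range(max(0, x - radius), min(w, x + radius + 1))
            vals = [gray[j][i] for j in ys for i in xs]
            t = sum(vals) / len(vals) - offset  # per-pixel threshold
            out[y][x] = 255 if gray[y][x] <= t else 0
    return out
```

The local variant is the slower of the two but is more robust when lighting varies across the scalp.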
In one implementation, to improve the accuracy and success rate of subsequent follicle point extraction, the first hair regions may be processed further after they are obtained; for example, connected-region analysis and morphological processing may be applied to the first hair regions segmented out by binarization.
For the connected-region analysis, the area of each first hair region may first be computed and compared with a preset area threshold; any first hair region whose area is smaller than the threshold is deleted (for example, converted to the color representing non-hair regions, which in FIG. 7 is black).
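A minimal sketch of this area-based filtering, assuming 4-connectivity and a binary mask with white (255) hair regions; the labeling is done with a breadth-first search rather than any particular library routine:

```python
from collections import deque

def filter_small_regions(mask, min_area):
    """Label 4-connected white (255) regions with BFS and blank out any
    region whose pixel count is below `min_area`, i.e. treat it as noise
    and convert it to the non-hair color (0)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 255 and not seen[y][x]:
                region, q = [], deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    region.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] == 255 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(region) < min_area:        # too small: delete the region
                    for cy, cx in region:
                        mask[cy][cx] = 0
    return mask
```

The region's area is simply its pixel count here; a real pipeline might use a library's connected-components statistics instead.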
For the morphological processing, the segmented first hair regions may first be dilated and then eroded to remove noise points within each region. Specifically, in the present application, the purpose of dilation is to enlarge each first hair region (i.e., expand its boundary outward) so that small noise specks contained in the region are merged into it. For example, where the first hair regions are white, these noise points are small black specks inside the white hair regions, and dilation turns them white as well. Erosion, in turn, shrinks each first hair region (i.e., contracts its boundary); its purpose is to restore the regions enlarged by dilation to their original size.
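The dilate-then-erode sequence (a morphological closing) might look like the sketch below; the plus-shaped 4-neighborhood structuring element is an assumption, since the text does not specify one:

```python
NBRS = ((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1))  # plus-shaped element (assumed)

def dilate(mask):
    """Grow white (255) regions by one pixel so small black specks inside
    hair regions are absorbed."""
    h, w = len(mask), len(mask[0])
    return [[255 if any(0 <= y+dy < h and 0 <= x+dx < w and mask[y+dy][x+dx] == 255
                        for dy, dx in NBRS) else 0
             for x in range(w)] for y in range(h)]

def erode(mask):
    """Shrink white regions by one pixel, restoring the size enlarged by
    dilation (out-of-bounds neighbors are treated as white)."""
    h, w = len(mask), len(mask[0])
    return [[255 if all(not (0 <= y+dy < h and 0 <= x+dx < w) or mask[y+dy][x+dx] == 255
                        for dy, dx in NBRS) else 0
             for x in range(w)] for y in range(h)]
```

Applying `erode(dilate(mask))` fills one-pixel holes while leaving region boundaries roughly where they started.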
At step 202, determine the first coordinate position of each to-be-extracted hair follicle on the basis of the first hair regions.
In one implementation, the minimum bounding rectangle of each first hair region and the hair growth direction may be used to determine the first coordinate position of each to-be-extracted hair follicle (i.e., its coordinate position in the first image coordinate system). For example, the minimum bounding rectangle of each first hair region may be drawn, the first coordinate positions of the midpoints of the two short edges of each rectangle extracted, and one of these midpoints selected as the first coordinate position of the corresponding follicle point according to the hair growth direction. Merely as an example, when the hair in the first image grows downward from top to bottom, the midpoint with the smaller y value is taken as the position of the corresponding follicle point. For example, FIG. 8 shows an example of the first coordinate positions determined in this way, where the dark dot at the root of each hair marks the position of a to-be-extracted hair follicle in the first image portion, for example the follicle indicated by marker F.
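The bounding-rectangle-plus-growth-direction rule above can be sketched as follows. This is a deliberate simplification: an axis-aligned bounding box stands in for the true minimum (rotated) bounding rectangle, and roughly vertical, downward-growing hair is assumed, so the short edges are the top and bottom of the box and the root is the top-edge midpoint.

```python
def follicle_point(region_pixels):
    """Estimate a follicle point from one hair region's pixels ((y, x) pairs),
    assuming roughly vertical hair growing downward. The midpoints of the two
    short (horizontal) edges of the axis-aligned bounding box are the
    candidates; the one with the smaller y value -- the root end -- is
    returned as (x, y) in image coordinates."""
    ys = [p[0] for p in region_pixels]
    xs = [p[1] for p in region_pixels]
    x_mid = (min(xs) + max(xs)) / 2.0
    return (x_mid, min(ys))   # smaller y: hair root for downward growth
```

For slanted hairs, a rotated minimum-area rectangle (e.g. via rotating calipers) would give a better short-edge midpoint, at the cost of more geometry.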
Returning to FIG. 1, at step 103, perform path planning on the basis of each first coordinate position to determine an extraction path for extracting each to-be-extracted hair follicle.
Specifically, such path planning may be performed using, for example, a shortest-path algorithm or a snake-order sorting algorithm. Taking the snake-order sorting algorithm as an example, the first image portion may be divided along the Y axis of the image coordinate system into several equal bands (for example, five): the y coordinates of all follicle points may be sorted in ascending order to obtain their minimum and maximum values, and the range between them divided into equal parts. The follicle points within each band are then sorted by x coordinate, alternating between ascending and descending order from band to band, thereby producing the planned path. For example, FIG. 9 shows path planning over the first coordinate positions using the snake-order sorting algorithm. In FIG. 9, the four horizontal lines divide the first image portion from top to bottom into five equal bands; the follicle points in the topmost band are sorted by ascending x coordinate, those in the second band by descending x coordinate, and so on, yielding the extraction path indicated by the white polyline.
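The snake-order sorting just described might be sketched as follows; this is a minimal reimplementation under the stated scheme (equal y bands, alternating x sort), not the application's code:

```python
def snake_path(points, n_bands=5):
    """Order follicle points ((x, y) pairs) into a snake-shaped extraction
    path: the y range is split into `n_bands` equal bands, and points within
    successive bands are sorted by x in alternating ascending/descending order."""
    ys = [p[1] for p in points]
    y_min, y_max = min(ys), max(ys)
    height = (y_max - y_min) / n_bands or 1.0   # guard against a flat y range
    bands = [[] for _ in range(n_bands)]
    for p in points:
        idx = min(int((p[1] - y_min) / height), n_bands - 1)
        bands[idx].append(p)
    path = []
    for i, band in enumerate(bands):
        band.sort(key=lambda p: p[0], reverse=(i % 2 == 1))  # alternate direction
        path.extend(band)
    return path
```

Compared with an exact shortest path (a traveling-salesman-style problem), the snake order is trivially cheap and still avoids long jumps between neighboring rows.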
At step 104, determine the actual extraction position of each to-be-extracted hair follicle along the extraction path determined at step 103.
For example, the actual extraction position of the first to-be-extracted hair follicle on the path is determined first; once it has been determined, the actual extraction position of the second follicle is determined, and so on, until the actual extraction positions of all to-be-extracted hair follicles have been determined.
In one embodiment, as shown in FIG. 3, determining the actual extraction position of each to-be-extracted hair follicle along the extraction path determined at step 103 may specifically include:
At step 301, obtain, on the basis of the first coordinate positions determined at step 102, the estimated extraction position, in the image acquisition device coordinate system, of the first to-be-extracted hair follicle on the extraction path. Specifically, the estimated extraction positions of all to-be-extracted follicles in the image acquisition device coordinate system may be obtained by converting the first coordinate positions determined at step 102 from the first image coordinate system to the image acquisition device coordinate system; the estimated extraction position of the first follicle on the path is then selected from among them. Once obtained, this estimated extraction position can be sent to the robotic arm carrying the hair-extraction needle so that the arm can move to it.
At step 302, instruct the robotic arm to move to the estimated extraction position of the current to-be-extracted hair follicle and acquire a second image, the second image including a second image portion corresponding to the region from which follicles are to be extracted. In one implementation, the hair-extraction needle is mounted on the robotic arm and the image acquisition device is mounted to move together with the arm. Note that because the image acquisition device moves with the arm, the positional relationship between the device and the arm is in fact fixed, and so is the positional relationship between the device and the needle mounted on the arm. Consequently, after the arm moves the needle to the estimated extraction position of the first follicle, the position from which the image acquisition device captures images has in fact changed relative to where it captured the first image. Note also that, in this document, if the arm is currently moving to the estimated extraction position of the first follicle on the path, the current to-be-extracted follicle is that first follicle; if it is moving to the estimated extraction position of the second follicle, the current to-be-extracted follicle is that second follicle; and so on.
The second image may be acquired in a manner similar to that described for step 101, so for brevity it is not described again here.
At step 303, determine the actual extraction position of the current to-be-extracted hair follicle on the basis of the second image portion. In the present application, because the image acquisition device captures the second image after the arm has moved the needle to the estimated extraction position of the current follicle, the second image differs from the first. In fact, the second image is captured from a position closer to the current follicle than the first image was, so it will be appreciated that a position determined from the second image portion included in the second image should be more accurate than one determined from the first image portion included in the first image. Step 303 is described in further detail below with reference to FIG. 4. Once the actual extraction position of the current follicle has been determined, it can be sent to the robotic arm so that the arm can move to it, enabling the needle to extract the current follicle accurately.
At step 304, determine, on the basis of the extraction path and the actual extraction position of the current follicle point, the estimated extraction position of the next to-be-extracted hair follicle, take that next follicle as the current one, and repeat steps 302-303 until the actual extraction position of every to-be-extracted hair follicle has been determined.
For example, after the actual extraction position of the first follicle has been determined and the arm has moved there, the estimated extraction position of the second follicle on the path can be determined from the first follicle's actual extraction position and the extraction path; steps 302-303 can then be repeated to determine the actual extraction position of the second follicle point, and so on, so that the actual extraction position of every follicle point can be determined. For example, FIG. 10A shows the second image portion after the actual extraction position of every to-be-extracted follicle point has been determined, and FIG. 10B shows an enlarged view of the local portion indicated by A in FIG. 10A. In FIG. 10B, point 1001 is the estimated extraction position of the follicle in local portion A and point 1002 is the actual extraction position obtained from the second image portion; evidently the actual extraction position reflects the true follicle position more precisely than the estimated one.
Referring to FIG. 4, determining the actual extraction position of the current to-be-extracted hair follicle on the basis of the second image portion may include:
At step 401, segment the second image portion to obtain a plurality of second hair regions. This step may be performed in a manner similar to step 201, so for brevity it is not described again here.
At step 402, determine, on the basis of the second hair regions, the second coordinate position of each to-be-extracted hair follicle in the second image coordinate system. This step may be performed in a manner similar to step 202, so for brevity it is likewise not described again here.
At step 403, calculate the needle-insertion point of the hair-extraction needle in the second image on the basis of the needle's position and orientation in the image acquisition device coordinate system.
At step 404, determine, among all the second coordinate positions, the position closest to the needle-insertion point. For example, the Euclidean distance algorithm may be used to determine which of the second coordinate positions is closest to the needle-insertion point.
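Step 404's nearest-point selection reduces to a Euclidean minimum; a minimal sketch over (x, y) image coordinates:

```python
import math

def closest_follicle(second_positions, needle_point):
    """Among all second coordinate positions ((x, y) pairs), return the one
    nearest the needle-insertion point by Euclidean distance."""
    return min(second_positions, key=lambda p: math.dist(p, needle_point))
```

With the handful of follicles visible in a close-up second image, a linear scan like this is plenty; a k-d tree would only pay off at much larger point counts.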
At step 405, obtain the actual extraction position of the current to-be-extracted hair follicle by performing a coordinate system conversion on the closest position. Specifically, in this step, the closest position may be converted from its position in the second image coordinate system into the corresponding position in the image acquisition device coordinate system to obtain the actual extraction position of the current follicle. Once obtained, the actual extraction position can be sent to the robotic arm carrying the needle so that the arm moves the needle there, thereby enabling the needle to extract the current follicle point.
With the above method, the present application can identify and extract hair follicle points fully automatically, thereby improving the efficiency of hair transplantation and lowering the skill and experience required of the operator.
According to an embodiment, the present application further provides a hair follicle point identification device. As shown in FIG. 11, the device includes a memory 1101 and a processor 1102, the memory 1101 storing a machine-executable program. When executing the program, the processor 1102 implements the hair follicle point identification method described in the above embodiments. In the present application, there may be one or more memories 1101 and processors 1102. The device may be implemented by electronic equipment intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The electronic equipment may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smartphones, wearable devices, and other similar computing devices.
The device may further include a communication interface 1103 for communicating (by wire or wirelessly) with external equipment (for example, the robotic arm and the image acquisition device) to exchange data with it.
The memory 1101 may include non-volatile and volatile memory. Non-volatile memory may include, for example, read-only memory (ROM), magnetic tape, floppy disk, flash memory, or optical memory. Volatile memory may include, for example, random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The memory 1101, the processor 1102, and the communication interface 1103 may be interconnected by a bus and communicate with one another. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 11, but this does not mean there is only one bus or one type of bus.
According to an embodiment, the present application further provides a hair follicle point identification system, such as the hair follicle point identification system 1200 shown in FIG. 12. As shown in FIG. 12, the system 1200 may include a control device 1201, a robotic arm 1202, and an image acquisition device 1204. The hair-extraction needle 1203 is mounted on the robotic arm 1202, so the arm 1202 can drive the needle as it moves. The image acquisition device 1204 is mounted to move synchronously with the robotic arm 1202, so the positional relationship between the device 1204 and the arm is fixed, and hence so is the positional relationship between the device and the needle. The control device 1201 may be used to implement the hair follicle point identification device shown in FIG. 11; it is communicatively connected (by wire or wirelessly) with the robotic arm 1202 and the image acquisition device 1204, and is configured to exchange information with the robotic arm 1202 and the image acquisition device 1204 so as to implement the hair follicle point identification method described in the above embodiments. In the present application, the control device 1201 may be implemented by electronic equipment intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The electronic equipment may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smartphones, wearable devices, and other similar computing devices.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these features have been described; nevertheless, any combination of them that involves no contradiction should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application and are described in relative detail, but they should not therefore be understood as limiting the scope of the patent. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the conception of the present application, all of which fall within its scope of protection. The scope of protection of this patent shall therefore be determined by the appended claims.

Claims (13)

  1. A hair follicle point identification method, comprising:
    acquiring a first image, the first image comprising a first image portion corresponding to a region from which hair follicles are to be extracted;
    determining, on the basis of the first image portion, a first coordinate position, in a first image coordinate system, of each to-be-extracted hair follicle in the region; and
    performing path planning on the basis of each first coordinate position so as to determine an extraction path for extracting each to-be-extracted hair follicle.
  2. The hair follicle point identification method according to claim 1, further comprising:
    determining the actual extraction position of each to-be-extracted hair follicle along the extraction path.
  3. The hair follicle point identification method according to claim 1, wherein determining, on the basis of the first image portion, the first coordinate position, in the first image coordinate system, of each to-be-extracted hair follicle in the region comprises:
    segmenting the first image portion to obtain a plurality of first hair regions; and
    determining the first coordinate position of each to-be-extracted hair follicle on the basis of the first hair regions.
  4. The hair follicle point identification method according to claim 3, wherein determining the first coordinate position of each to-be-extracted hair follicle on the basis of the first hair regions comprises:
    using the minimum bounding rectangle of each first hair region together with the hair growth direction to determine the first coordinate position of each to-be-extracted hair follicle.
  5. The hair follicle point identification method according to claim 3, wherein segmenting the first image portion to obtain a plurality of first hair regions comprises:
    binarizing the first image portion to obtain the plurality of first hair regions.
  6. The hair follicle point identification method according to claim 5, further comprising, after the plurality of first hair regions are obtained:
    performing connected-region analysis and morphological processing on the plurality of first hair regions.
  7. The hair follicle point identification method according to claim 1, wherein determining the actual extraction position of each to-be-extracted hair follicle along the extraction path comprises:
    obtaining, on the basis of the first coordinate position, an estimated extraction position, in an image acquisition device coordinate system, of the first to-be-extracted hair follicle on the extraction path;
    instructing a robotic arm to move to the estimated extraction position and acquiring a second image, wherein the second image comprises a second image portion corresponding to the region from which hair follicles are to be extracted; and
    determining the actual extraction position of the first to-be-extracted hair follicle on the basis of the second image portion.
  8. The hair follicle point identification method according to claim 7, wherein determining the actual extraction position of the first to-be-extracted hair follicle on the basis of the second image portion comprises:
    segmenting the second image portion to obtain a plurality of second hair regions;
    determining, on the basis of the second hair regions, a second coordinate position of each to-be-extracted hair follicle in a second image coordinate system;
    calculating, on the basis of a position and orientation of a hair-extraction needle in the image acquisition device coordinate system, a needle-insertion point of the needle in the second image;
    determining, among all the second coordinate positions, the position closest to the needle-insertion point; and
    obtaining the actual extraction position of the current to-be-extracted hair follicle by performing a coordinate system conversion on the closest position.
  9. The hair follicle point identification method according to claim 1, wherein determining, on the basis of the first image portion, the first coordinate position, in the first image coordinate system, of each to-be-extracted hair follicle in the region comprises:
    establishing a first image coordinate system for the acquired first image to represent the position of each pixel of the first image within the first image;
    identifying the position of the first image portion within the first image on the basis of the density and distribution of hair-colored pixels contained in the first image, so as to extract the first image portion from the first image; and
    determining, on the basis of the first image portion, the first coordinate position of each to-be-extracted hair follicle in the first image coordinate system.
  10. The hair follicle point identification method according to claim 3, wherein segmenting the first image portion to obtain a plurality of first hair regions comprises:
    binarizing the first image portion to obtain the plurality of first hair regions; and
    performing connected-region analysis and morphological processing on the plurality of first hair regions.
  11. A hair follicle point identification device, comprising:
    a memory storing a machine-executable program; and
    a processor that, when executing the machine-executable program, implements the hair follicle point identification method according to any one of claims 1-10.
  12. A hair follicle point identification system, comprising:
    a robotic arm on which a hair-extraction needle is mounted;
    an image acquisition device, wherein the image acquisition device is mounted so as to move synchronously with the robotic arm; and
    a control device communicatively connected with the robotic arm and the image acquisition device, the control device being configured to exchange information with the robotic arm and the image acquisition device so as to implement the hair follicle point identification method according to any one of claims 1-10.
  13. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the hair follicle point identification method according to any one of claims 1-10.
PCT/CN2021/120636 2020-12-25 2021-09-26 Hair follicle point identification method, system, device and storage medium WO2022134703A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011563118.7A CN114694141A (zh) 2020-12-25 2020-12-25 Hair follicle point identification method, system, device and storage medium
CN202011563118.7 2020-12-25

Publications (1)

Publication Number Publication Date
WO2022134703A1 true WO2022134703A1 (zh) 2022-06-30

Family

ID=82129123

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/120636 WO2022134703A1 (zh) 2020-12-25 2021-09-26 Hair follicle point identification method, system, device and storage medium

Country Status (2)

Country Link
CN (1) CN114694141A (zh)
WO (1) WO2022134703A1 (zh)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115346681A (zh) * 2022-08-29 2022-11-15 北京云数智康医疗科技有限公司 AI evaluation, analysis and calculation system and method for the survival rate of head hair transplantation surgery
CN116570349B (zh) * 2023-03-15 2024-04-26 磅客策(上海)智能医疗科技有限公司 Hair follicle extraction system, control method and storage medium
CN116747018A (zh) * 2023-06-28 2023-09-15 磅客策(上海)智能医疗科技有限公司 Planning method and system for a hair follicle extraction path, and storage medium
CN116705336B (zh) * 2023-07-19 2024-02-09 北京云数智康医疗科技有限公司 Intelligent hair transplantation evaluation system based on image analysis

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101506825A (zh) * 2006-08-25 2009-08-12 修复型机器人公司 System and method for classifying follicular units
AU2011250755A1 (en) * 2005-09-30 2011-12-08 Restoration Robotics, Inc. Automated systems and methods for harvesting and implanting follicular units
US20160253799A1 (en) * 2013-11-01 2016-09-01 The Florida International University Board Of Trustees Context Based Algorithmic Framework for Identifying and Classifying Embedded Images of Follicle Units
CN109452959A (zh) * 2018-11-27 2019-03-12 王鹏君 Method and device for traceless layered extraction
CN111839616A (zh) * 2020-08-18 2020-10-30 重庆大学 Control system for a hair follicle extraction structure


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117159157A (zh) * 2023-09-27 2023-12-05 北京碧莲盛不剃发植发医疗美容门诊部有限责任公司 Manipulator and control module for shave-free hair transplantation
CN117159157B (zh) * 2023-09-27 2024-02-06 北京碧莲盛不剃发植发医疗美容门诊部有限责任公司 Manipulator and control module for shave-free hair transplantation

Also Published As

Publication number Publication date
CN114694141A (zh) 2022-07-01

Similar Documents

Publication Publication Date Title
WO2022134703A1 (zh) Hair follicle point identification method, system, device and storage medium
WO2017190656A1 (zh) Pedestrian re-identification method and device
JP6719457B2 (ja) Method and system for extracting the main subject of an image
CN109241973B (zh) Fully automatic soft segmentation method for characters against textured backgrounds
JP6932402B2 (ja) Multi-gesture fine segmentation method for smart home scenes
CN109886170B (zh) Intelligent detection, identification and statistics system for Oncomelania snails
CN110781877B (zh) Image recognition method, device and storage medium
CN112785591B (zh) Method and device for detecting and segmenting rib fractures in CT images
CN110276279B (zh) Arbitrary-shape scene text detection method based on image segmentation
CN109344820A (zh) Digital electricity meter reading recognition method based on computer vision and deep learning
JP2007272435A (ja) Facial feature extraction device and facial feature extraction method
CN114049499A (zh) Target object detection method, device and storage medium for continuous contours
CN112819812B (zh) Powder bed defect detection method based on image processing
KR20170015299A (ko) Method and apparatus for object tracking and segmentation via background tracking
JP2011248702A (ja) Image processing device, image processing method, image processing program, and program storage medium
CN106683105B (zh) Image segmentation method and image segmentation device
CN110400287B (zh) System and method for detecting the tumor invasion edge and center in IHC-stained images of colorectal cancer
CN115578741A (zh) Scanned-document layout analysis method based on the Mask R-CNN algorithm and type segmentation
CN110136139B (zh) Dental nerve segmentation method in facial CT images based on shape features
CN110889374A (zh) Seal image processing method and device, computer, and storage medium
CN113793385A (zh) Method and device for locating fish head and fish tail
CN110348353B (zh) Image processing method and device
CN116681579A (zh) Real-time video face replacement method, medium and system
CN108898045B (zh) Multi-label image preprocessing method for deep-learning-based gesture recognition
CN116703748A (zh) Calligraphy work evaluation method and device, electronic device and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21908704

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21908704

Country of ref document: EP

Kind code of ref document: A1