CN109544516B - Image detection method and device


Info

Publication number
CN109544516B
Authority
CN
China
Prior art keywords
image
area
area image
sample
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811309965.3A
Other languages
Chinese (zh)
Other versions
CN109544516A (en)
Inventor
鞠汶奇
刘子威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Hetai Intelligent Home Appliance Controller Co ltd
Original Assignee
Shenzhen Het Data Resources and Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Het Data Resources and Cloud Technology Co Ltd filed Critical Shenzhen Het Data Resources and Cloud Technology Co Ltd
Priority to CN201811309965.3A priority Critical patent/CN109544516B/en
Publication of CN109544516A publication Critical patent/CN109544516A/en
Application granted granted Critical
Publication of CN109544516B publication Critical patent/CN109544516B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T Image data processing or generation (G Physics; G06 Computing, calculating or counting)
    • G06T 7/0012 Biomedical image inspection (under G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06T 2207/20081 Training; Learning (under G06T 2207/20 Special algorithmic details)
    • G06T 2207/20084 Artificial neural networks [ANN] (under G06T 2207/20 Special algorithmic details)
    • G06T 2207/30088 Skin; Dermal (under G06T 2207/30004 Biomedical image processing)
    • G06T 2207/30201 Face (under G06T 2207/30196 Human being; Person)

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image detection method and device. The method comprises: determining a target image, wherein the target image is an image with the number of blackheads to be detected; inputting the target image into a depth detection neural network, and outputting the blackhead positions of a first area image and the position of a second area image, wherein the degree of light reflection of the second area image is greater than that of the first area image; inputting the second area image into a deep neural network, and outputting the number of blackheads in the second area image; and determining the number of blackheads in the first area image according to the detected blackhead positions, and determining the number of blackheads in the target image according to the numbers of blackheads in the first area image and the second area image. The method and device can improve the accuracy of blackhead detection.

Description

Image detection method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image detection method and an image detection device.
Background
Currently, facial skin problems affect people's appearance, and blackheads are among the most common of these problems. The number of blackheads also varies with gender, age, region and other attributes.
In the field of beauty and skin care, a terminal such as a mobile phone can automatically capture a picture, automatically detect the number of blackheads on the human face, and combine the result with other skin characteristics of the face to provide skin-care advice to the user. For example, based on the detected number of facial blackheads, suitable skin-care products and foods can be selected for the user to improve facial skin problems. Specifically, a typical existing method for detecting the number of blackheads irradiates the skin with two kinds of light, ordinary white light and ultraviolet light; extracts the highlight targets under the ultraviolet light to obtain a target highlight image; generates a blackhead area image from the target highlight image; and then uses the blackhead area image as a mask to mark the blackhead regions in the white-light image so as to identify the blackheads.
Because this method relies on irradiating the skin with ultraviolet light, it requires complex equipment and its accuracy is low.
Disclosure of Invention
The application provides an image detection method and device, which can improve the accuracy of blackhead detection.
In a first aspect, an embodiment of the present application provides an image detection method, including:
determining a target image, wherein the target image is an image with the number of blackheads to be detected;
inputting the target image into a depth detection neural network, and outputting the blackhead position of a first area image and the position of a second area image, wherein the reflection degree of the second area image is greater than that of the first area image;
inputting the second area image into a deep neural network, and outputting the number of blackheads in the second area image;
determining the number of blackheads in the first area image according to the blackhead position of the first area image, and determining the number of blackheads in the target image according to the number of blackheads in the first area image and the number of blackheads in the second area image.
In the embodiment of the application, after a target image is determined, the blackhead position of a first area image and the position of a second area image are determined through a depth detection neural network, wherein the degree of light reflection of the second area image is greater than that of the first area image; then, the second area image is input into a deep neural network to obtain the number of blackheads in the second area image; finally, the number of blackheads in the first area image is obtained, and the number of blackheads in the target image is obtained from the numbers of blackheads in the first and second area images. By implementing this embodiment of the application, the number of blackheads in the more strongly reflective second area image is detected separately, which avoids situations where the number of blackheads cannot be calculated accurately because of factors such as illumination intensity, and thereby improves the accuracy of blackhead detection.
In one possible implementation, the target image is a region corresponding to a nose; the determining the target image comprises:
after a face image is acquired, determining face key points in the face image; the face key points in the face image comprise a first key point and a second key point; the first key point is the leftmost point of the nasal ala among the face key points, and the second key point is the rightmost point of the nasal ala among the face key points;
and determining the target image according to the first key point and the second key point.
In a possible implementation manner, determining the face key points in the face image after the face image is acquired, and determining the target image according to the first key point and the second key point, includes:
after a face image is acquired, determining face key points in the face image; the face key points in the face image comprise a first key point, a second key point, a third key point and a fourth key point; the first key point is a key point 31 in the face key points, the second key point is a key point 35 in the face key points, the third key point is one of a key point 28 and a key point 29 in the face key points, and the fourth key point is a key point 33 in the face key points;
determining a first side length of the target image according to the abscissa of the first key point and the abscissa of the second key point;
determining a second side length of the target image according to the ordinate of the third key point and the ordinate of the fourth key point;
and determining the target image according to the first side length and the second side length.
In the embodiment of the application, the target image is determined through the first key point, the second key point, the third key point and the fourth key point, the method is simple and feasible, and the determination efficiency of the target image is improved.
In one possible implementation, the determining the target image includes:
after a face image is acquired, determining face key points in the face image; the face key points in the face image comprise a fifth key point, a sixth key point and a seventh key point; the fifth key point is a key point 30 in the face key points, the sixth key point is a key point 0 in the face key points, and the seventh key point is a key point 16 in the face key points;
determining a central point of the target image according to the fifth key point;
determining a third side length of the target image according to the abscissa of the sixth key point and the abscissa of the seventh key point;
determining a fourth side length of the target image according to the third side length;
and determining the target image according to the central point, the third side length and the fourth side length.
In one possible implementation manner, the inputting the target image to a depth detection neural network, and outputting a blackhead position of the first area image and a position of the second area image includes:
reducing the resolution of the target image to obtain the target image with the reduced resolution;
inputting the target image with the reduced resolution into the depth detection neural network, and outputting the blackhead position of the first area image and the position of the second area image;
or, enhancing the resolution of the target image to obtain the target image with the enhanced resolution;
and inputting the target image with the enhanced resolution into the depth detection neural network, and outputting the blackhead position of the first area image and the position of the second area image.
In the embodiment of the application, the operation speed of the depth detection neural network can be increased by reducing the resolution of the target image, and the calculation efficiency of blackhead detection is improved; by enhancing the resolution of the target image, the target image can be clearer, and the accuracy of blackhead detection is improved.
In a possible implementation manner, before the inputting the second area image into the deep neural network and outputting the number of blackheads in the second area image, the method further includes:
acquiring a first sample image and a second sample image, wherein the reflection degree of the second sample image is greater than that of the first sample image, and an object included in the first sample image is the same as an object included in the second sample image;
determining the number of blackheads of the first sample image;
inputting the second sample image and the number of blackheads of the first sample image into the deep neural network, and training the deep neural network.
That the object included in the first sample image is the same as the object included in the second sample image can be understood to mean that the two images are captured from the same face or the same person; that is, the first sample image and the second sample image belong to the same group of sample images, or are in one-to-one correspondence. After the first sample image and the second sample image are obtained, the number of blackheads in the first sample image can be determined. It is understood that at least two such groups of first and second sample images are used.
In the embodiment of the application, the deep neural network is trained by the method, so that the efficiency of training the deep neural network can be effectively improved, namely, the accuracy of the output detection result of the deep neural network is improved.
In one possible implementation, the acquiring the first sample image and the second sample image includes:
acquiring the first sample image under a first light source;
acquiring the second sample image under a second light source, wherein the illumination intensity of the first light source is smaller than that of the second light source.
In the embodiment of the present application, the first sample image obtained under the first light source may be understood as a clear, non-reflective image obtained under normal illumination; that is, the number of blackheads can be clearly determined from it. The second sample image obtained under the second light source may be understood as an image with a reflective region obtained under strong light; that is, it contains an area in which the number of blackheads is unknown. By obtaining sample images under different illumination, the embodiment of the application avoids sample homogeneity and increases the diversity of sample images across lighting scenarios, thereby improving the accuracy of deep neural network training.
In a second aspect, an embodiment of the present application provides an image detection apparatus, including:
a first determining unit, configured to determine a target image, wherein the target image is an image with the number of blackheads to be detected;
the first input and output unit is used for inputting the target image to the depth detection neural network and outputting the blackhead position of the first area image and the position of the second area image, and the light reflection degree of the second area image is greater than that of the first area image;
the second input and output unit is used for inputting the second area image into a deep neural network and outputting the number of blackheads in the second area image;
a second determining unit, configured to determine the number of blackheads in the first area image according to the blackhead position of the first area image, and determine the number of blackheads in the target image according to the number of blackheads in the first area image and the number of blackheads in the second area image.
In one possible implementation, the target image is a region corresponding to a nose;
the first determination unit includes:
the first determining subunit is used for determining the face key points in the face image after the face image is acquired; the face key points in the face image comprise a first key point, a second key point, a third key point and a fourth key point; the first key point is the leftmost point of the nasal ala among the face key points, the second key point is the rightmost point of the nasal ala, the third key point is the uppermost point of the nasal bridge, and the fourth key point is the lowermost point of the nasal septum;
the second determining subunit is used for determining the first side length of the target image according to the abscissa of the first key point and the abscissa of the second key point;
the third determining subunit is configured to determine a second side length of the target image according to the ordinate of the third key point and the ordinate of the fourth key point;
and the fourth determining subunit is used for determining the target image according to the first side length and the second side length.
In one possible implementation, the first input-output unit includes:
the reducing subunit is used for reducing the resolution of the target image to obtain the target image with the reduced resolution;
a first input/output subunit, configured to input the target image with the reduced resolution to the depth detection neural network, and output a blackhead position of a first area image and a position of a second area image;
or, the enhancement unit is used for enhancing the resolution of the target image to obtain the target image with the enhanced resolution;
and the second input and output subunit is used for inputting the target image with the enhanced resolution into the depth detection neural network and outputting the blackhead position of the first area image and the position of the second area image.
In one possible implementation, the apparatus further includes:
an acquisition unit, configured to acquire a first sample image and a second sample image, wherein the degree of light reflection of the second sample image is greater than that of the first sample image, and the object included in the first sample image is the same as the object included in the second sample image;
a third determining unit configured to determine the number of blackheads of the first sample image;
and the training unit is used for inputting the second sample image and the number of the blackheads of the first sample image into the deep neural network and training the deep neural network.
In one possible implementation manner, the obtaining unit includes:
a first acquisition subunit, configured to acquire the first sample image under a first light source;
and the second acquisition subunit is used for acquiring the second sample image under a second light source, wherein the illumination intensity of the first light source is less than that of the second light source.
In a third aspect, an embodiment of the present application further provides an image detection apparatus, including: the system comprises a processor, a memory and an input/output interface, wherein the processor, the memory and the input/output interface are interconnected through lines; wherein the memory stores program instructions; the program instructions, when executed by the processor, cause the processor to perform the respective method according to the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a computer program is stored, the computer program comprising program instructions that, when executed by a processor of an image detection apparatus, cause the processor to perform the method of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
Fig. 1 is a schematic flowchart of an image detection method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a face key point provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of determining a target image according to an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart of a deep neural network training method provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of an image detection apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a first determining subunit provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a first input/output unit according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of another first input/output unit according to an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of another image detection apparatus according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an obtaining unit provided in an embodiment of the present application;
fig. 11 is a schematic structural diagram of another image detection apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, the present application will be further described in detail with reference to the accompanying drawings.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, or apparatus.
Referring to fig. 1, fig. 1 is a schematic flowchart of an image detection method provided in an embodiment of the present application, where the image detection method may be applied to an image detection apparatus, the image detection apparatus may include a server or a terminal device, and the terminal device may include a mobile phone, a desktop computer, a laptop computer, and other devices.
As shown in fig. 1, the image detection method includes:
101. Determining a target image, wherein the target image is the image with the number of blackheads to be detected.
In the embodiment of the application, determining the target image may be understood as the image detection apparatus capturing or acquiring the target image itself, or as the image detection apparatus obtaining the target image from another device; the embodiments of the present application do not limit how the image detection apparatus obtains the target image. It can be understood that the target image is the image with the number of blackheads to be detected, that is, the image for which the severity of blackheads needs to be determined.
Optionally, the target image is the region corresponding to the nose, and a nose region may be cropped directly from the face image, for example based on a fixed length and width. However, because nose sizes differ from person to person, cropping with a fixed length and width yields inconsistent nose regions. Accordingly, embodiments of the present application further provide a method for determining the image of the nose region, as follows:
after a face image is acquired, determining face key points in the face image; the face key points in the face image comprise a first key point and a second key point; the first key point is the leftmost point of the nasal ala among the face key points, and the second key point is the rightmost point of the nasal ala among the face key points;
and determining the target image according to the first key point and the second key point.
The embodiment of the application also provides a method for determining the image of the nose area through the key points of the face.
Specifically, after the face image is obtained, determining face key points in the face image, and determining the target image according to the first key points and the second key points include:
after a face image is acquired, determining face key points in the face image; the face key points in the face image comprise a first key point, a second key point, a third key point and a fourth key point; the first key point is a key point 31 of the face key points, the second key point is a key point 35 of the face key points, the third key point is one of a key point 28 and a key point 29 of the face key points, and the fourth key point is a key point 33 of the face key points;
determining a first side length of the target image according to the abscissa of the first key point and the abscissa of the second key point;
determining a second side length of the target image according to the ordinate of the third key point and the ordinate of the fourth key point;
and determining the target image according to the first side length and the second side length.
In the embodiment of the application, the target image is determined through the first key point, the second key point, the third key point and the fourth key point, the method is simple and feasible, and the determination efficiency of the target image is improved.
In the embodiment of the present application, the face key points in the face image may be determined by edge-detection algorithms such as the Roberts operator or the Sobel operator, or by related models such as the active contour (snake) model.
Although these algorithms and models can determine the face key points in the face image, they are relatively complex on one hand and often perform poorly on the other. The embodiment of the present application therefore provides a simple method that is easy to implement and locates the face key points effectively, as follows:
the above determining the face key points in the face image includes:
and determining the key points of the human face in the human face image through a third-party application.
In the embodiment of the application, the third-party application may be the third-party toolkit dlib. dlib is an open-source C++ toolkit of machine learning algorithms whose face key point localization works well, and it is widely used in robotics, embedded devices, mobile phones and large high-performance computing environments. The toolkit can therefore be used to locate the face key points effectively. Specifically, the face key points may be 68 face key points. As shown in fig. 2, which is a schematic diagram of the face key points provided in the embodiment of the present application, the face key points comprise key point 0, key point 1, ..., key point 67, i.e. 68 key points in total.
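To make the localization step concrete, the following is a minimal Python sketch of locating the 68 face key points with dlib; the model file name is dlib's standard pretrained 68-landmark predictor, and the image path is a hypothetical placeholder.

```python
# Minimal sketch: 68-point face landmark localization with dlib.
# "face.jpg" is a placeholder; the .dat file is dlib's standard
# pretrained 68-landmark shape predictor.
import dlib
import cv2

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")                     # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for face in detector(gray, 1):                   # upsample once to find small faces
    shape = predictor(gray, face)
    # key points 0..67, matching the numbering in fig. 2
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```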
In the embodiment of the present application, referring to fig. 2, the face key points include key points 31, 35, 29 and 33, which lie in the nose region and are therefore used as reference key points. It can be understood that when the nose key points are located through the face key points, each key point has coordinates, namely pixel coordinates. The first side length of the target image is therefore determined from the abscissas of key points 31 and 35, and the second side length from the ordinates of key points 29 and 33. For example, key point 31 at the leftmost side of the nasal ala and key point 35 at the rightmost side are selected as reference points, where the x1 coordinate is the abscissa of key point 31 and the x2 coordinate is the abscissa of key point 35; key point 29 at the uppermost part of the nasal bridge and key point 33 at the lowermost part of the nasal septum are selected as reference points, where the y1 coordinate is the ordinate of key point 29 and the y2 coordinate is the ordinate of key point 33. The nose region is determined by the coordinates (x1, y1, x2, y2), and the image of the nose region is cropped out as the target image, as shown in fig. 3, which is a schematic diagram of determining a target image according to an embodiment of the present application. In fig. 3, the length of the target region is the difference between the abscissas of key points 31 and 35, and the width is the difference between the ordinates of key points 29 and 33. By implementing the embodiment of the application, the target area image can be located quickly, improving detection efficiency.
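A minimal sketch of this crop is given below, assuming `img` and the `points` list from the dlib sketch above; the min/max clamping is an added safeguard for tilted faces.

```python
# Crop the nose region from key points 31/35 (abscissas) and 29/33
# (ordinates), as described above; `img` and `points` come from the
# dlib sketch and are assumptions of this illustration.
x1, x2 = points[31][0], points[35][0]   # leftmost / rightmost points of the nasal ala
y1, y2 = points[29][1], points[33][1]   # nasal bridge / nasal septum points
left, right = min(x1, x2), max(x1, x2)
top, bottom = min(y1, y2), max(y1, y2)
target = img[top:bottom, left:right]    # nose-region target image
```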
It can be understood that, in the embodiment of the present application, the abscissas and ordinates of the first, second, third and fourth key points are all expressed in the same coordinate system, for example the pixel coordinate system of the face image.
Optionally, an embodiment of the present application further provides another method for determining a target image, as follows:
after a face image is acquired, determining face key points in the face image; the face key points in the face image comprise a fifth key point, a sixth key point and a seventh key point; the fifth key point is a key point 30 of the face key points, the sixth key point is a key point 0 of the face key points, and the seventh key point is a key point 16 of the face key points;
determining the central point of the target image according to the fifth key point;
determining a third side length of the target image according to the abscissa of the sixth key point and the abscissa of the seventh key point;
determining a fourth side length of the target image according to the third side length;
and determining the target image according to the central point, the third side length and the fourth side length.
Specifically, in the embodiment of the present application, referring to fig. 2, the face key points include key points 30, 0 and 16, which are used as reference key points. It can be understood that when the nose region is located through the face key points, each key point has pixel coordinates. Key point 30 is taken as the center point of the nose region; one quarter of the absolute difference between the abscissas of key points 0 and 16 is taken as the third side length of the nose region, i.e. its length; and the third side length is also taken as the fourth side length, i.e. the width and length of the nose region are equal. For example, key point 30 is selected as the center point of the nose region, and key points 0 and 16 are selected as reference points, where the x3 coordinate is the abscissa of key point 0 and the x4 coordinate is the abscissa of key point 16; the absolute difference of the abscissas (x3, x4) is calculated, one quarter of it is taken as the length and width of the nose region, and the image of the nose region is cropped out as the target image. By implementing the embodiment of the application, the target area image can be located quickly, improving detection efficiency. It is understood that the specific way of determining the side lengths of the nose region is not limited in the embodiment of the present application.
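A minimal sketch of this second crop strategy, under the same assumed `points` and `img`:

```python
# Alternative crop: key point 30 as the center, one quarter of
# |x0 - x16| as the square side length; `points`/`img` are assumed
# to come from the dlib sketch above.
cx, cy = points[30]                               # nose tip as center point
side = abs(points[0][0] - points[16][0]) // 4     # quarter of the face width
half = side // 2
top, left = max(cy - half, 0), max(cx - half, 0)  # clamp to image bounds
target = img[top:top + side, left:left + side]
```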
It can be understood that, in the embodiment of the present application, the abscissas and ordinates of the fifth, sixth and seventh key points are expressed in the same coordinate system, for example the pixel coordinate system of the face image.
102. Inputting the target image into a depth detection neural network, and outputting the blackhead position of a first area image and the position of a second area image, wherein the degree of light reflection of the second area image is greater than that of the first area image.
In the embodiment of the application, the acquired target image may reflect light to different degrees. A region with a high degree of reflection (i.e. a highlight or strongly reflective region) seriously affects the detection of facial blackheads, while regions with a lower degree of reflection have little influence on it. It can be understood that the first area image is a region whose degree of reflection does not prevent the depth detection neural network from detecting the blackhead positions within it. The second area image is a region with a greater degree of reflection than the first area image; it can be understood as a highlight or strongly reflective region, that is, a region in which the blackhead positions cannot be detected by the depth detection neural network.
In the embodiment of the present application, the depth detection neural network may be understood as a You Only Look Once (YOLO) target detection network. Specifically, the target image is input into the YOLO network and divided into an S × S grid, where S is an integer greater than or equal to 1. Each grid cell is responsible for detecting objects whose center points fall within it, and each grid cell predicts B bounding boxes and confidence scores for those boxes. The size and position of a bounding box are characterized by 4 values (x, y, w, h), where (x, y) are the center coordinates of the bounding box and w and h are its width and height. For each grid cell, C class probability values are also predicted; these represent the probability that the object in a bounding box predicted by that cell belongs to each class, i.e. they are conditional probabilities given the confidence of the corresponding bounding box. Thus, the prediction for each bounding box actually contains 5 elements, (x, y, w, h, c), where the first 4 represent the size and position of the bounding box and the last value is the confidence. In total, S × S × B target windows can be predicted; target windows with low probability are removed by setting a threshold, and redundant windows are removed by the non-maximum suppression (NMS) algorithm.
For example, when the target image is input, it is divided into a 7 × 7 grid (S = 7), and 2 boxes are predicted for each grid cell (B = 2), so 7 × 7 × 2 target windows are predicted; target windows with low probability are removed according to the threshold, and the redundant windows are finally removed by the non-maximum suppression algorithm NMS. By implementing the embodiment of the application, the blackhead positions and the image of the highly reflective area can be detected quickly by the depth detection neural network, which speeds up the subsequent counting of blackheads in the highly reflective area image and improves blackhead detection efficiency.
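The thresholding and NMS post-processing described above can be sketched as follows; the (x, y, w, h, c) box layout follows the text, while the two threshold values are illustrative assumptions.

```python
# Minimal sketch: confidence thresholding + non-maximum suppression over
# (x, y, w, h, c) predictions in center format. Thresholds are assumptions.
import numpy as np

def filter_windows(pred, score_thr=0.25, iou_thr=0.45):
    pred = pred[pred[:, 4] > score_thr]          # drop low-probability windows
    x1 = pred[:, 0] - pred[:, 2] / 2             # center format -> corners
    y1 = pred[:, 1] - pred[:, 3] / 2
    x2 = pred[:, 0] + pred[:, 2] / 2
    y2 = pred[:, 1] + pred[:, 3] / 2
    areas = (x2 - x1) * (y2 - y1)
    order = pred[:, 4].argsort()[::-1]           # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(x1[i], x1[order[1:]])   # overlap with remaining boxes
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thr]        # suppress redundant windows
    return pred[keep]
```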
Optionally, the inputting the target image into the depth detection neural network, and outputting the blackhead position of the first area image and the position of the second area image includes:
reducing the resolution of the target image to obtain a target image with the reduced resolution;
inputting the target image with the reduced resolution into the depth detection neural network, and outputting the blackhead position of the first area image and the position of the second area image;
or, enhancing the resolution of the target image to obtain the target image with the enhanced resolution;
and inputting the target image with the enhanced resolution into the depth detection neural network, and outputting the blackhead position of the first area image and the position of the second area image.
In this embodiment, a user may set the resolution of the image input to the depth detection neural network as needed, or the image processing apparatus may set it automatically; the image input to the depth detection neural network may be understood as the target image. For example, when the resolution of the target image is 664 × 664, it may be reduced to 448 × 448; when the resolution of the target image is 224 × 224, it may be enhanced to 448 × 448. By reducing the resolution of the target image, the operation speed of the depth detection neural network can be increased and the operation efficiency improved; by enhancing the resolution, the clarity of the target image can be increased and the detection accuracy improved. It can be understood that the embodiments of the present application do not limit how the resolution is set, nor its specific value.
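A minimal sketch of the resolution adjustment with OpenCV follows; the 448 × 448 network input size matches the examples in the text and is an assumption of this sketch.

```python
# Reduce or enhance the target image to the network input resolution.
# The 448 x 448 size and interpolation choices are assumptions.
import cv2

def to_network_resolution(target, size=448):
    h, w = target.shape[:2]
    # INTER_AREA suits downscaling; INTER_CUBIC suits upscaling
    interp = cv2.INTER_AREA if min(h, w) > size else cv2.INTER_CUBIC
    return cv2.resize(target, (size, size), interpolation=interp)
```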
103. Inputting the second area image into a deep neural network, and outputting the number of blackheads in the second area image.
The deep neural network may be understood as a back propagation (BP) neural network. Specifically, the second area image is a highlight-region image; when it is input into the BP neural network, the BP neural network predicts the number of blackheads in it. By inputting the second area image into the deep neural network separately, a more accurate blackhead count can be obtained, the influence of the reflective region on the calculation is avoided, and the accuracy of blackhead counting is improved. It is understood that the embodiments of the present application do not limit the specific deep neural network.
104. Determining the number of blackheads in the first area image according to the blackhead position of the first area image, and determining the number of blackheads in the target image according to the number of blackheads in the first area image and the number of blackheads in the second area image.
In the embodiment of the present application, the blackhead positions in the first area image are obtained through the YOLO network, which yields the coordinates of those positions; counting the coordinates gives the number of blackheads in the first area image. Alternatively, the number of blackheads in the first area image may be detected by other target detection algorithms, such as background subtraction, the optical flow method or the inter-frame difference method. The numbers of blackheads in the first area image and the second area image are then added to obtain the number of blackheads in the whole target image. It can be understood that the embodiment of the present application does not limit the specific way of calculating the number of blackheads.
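As a trivial sketch of this final step, assuming `first_boxes` holds the blackhead positions detected in the first area image and `count_second` is the count output by the counting network:

```python
# Step 104 as a sketch: count the detected coordinates in the first
# area image and add the predicted count for the second area image.
# `first_boxes` and `count_second` are assumed inputs.
def total_blackheads(first_boxes, count_second):
    count_first = len(first_boxes)    # one coordinate per detected blackhead
    return count_first + count_second
```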
By implementing the embodiment of the application, the number of blackheads in the more strongly reflective second area image is detected separately, which avoids situations where the number of blackheads cannot be calculated accurately because of factors such as illumination intensity, and further improves the accuracy of blackhead detection.
For the image detection method shown in fig. 1, the deep neural network is a trained network model, that is, it is obtained by training a network model. An embodiment of the present application therefore further provides a method for training the network model. Referring to fig. 4, fig. 4 is a schematic flowchart of a deep neural network training method provided in an embodiment of the present application; as shown in fig. 4, the training method includes:
401. acquiring a first sample image and a second sample image, wherein the degree of light reflection of the second sample image is greater than that of the first sample image, and an object included in the first sample image is the same as an object included in the second sample image.
In this embodiment of the application, that the object included in the first sample image is the same as the object included in the second sample image can be understood to mean that the two images are captured from the same face or the same person; that is, the first sample image and the second sample image belong to the same group of sample images, or are in one-to-one correspondence. After the first sample image and the second sample image are obtained, the number of blackheads in the first sample image can be determined. It is understood that at least two such groups of first and second sample images are used.
The image detection apparatus can acquire the first sample image and the second sample image itself, or obtain them from other devices. Taking acquisition by the image detection apparatus as an example, the acquiring of the first sample image and the second sample image includes: acquiring M sample images as the first sample image and the second sample image, wherein the sample images are nose-region sample images.
Here the value of M is greater than or equal to 300. In the embodiment of the present application, acquiring the sample images specifically means acquiring nose-region sample images, and there are at least 300 of them. To train the network model better, the nose-region sample images may include nose images with highlight regions as well as nose images without them. The embodiment of the application does not limit what kind of device, such as a mobile phone or a camera, is used to collect the nose-region sample images.
At least 300 nose-region sample images are collected because, during training, fewer than 300 samples produce a worse training effect than 300 or more; moreover, when 300 or more nose-region sample images are collected, the trained network model generalizes better.
Optionally, the acquiring the first sample image and the second sample image includes:
acquiring the first sample image under a first light source;
and acquiring the second sample image under a second light source, wherein the illumination intensity of the first light source is less than that of the second light source.
In this embodiment, the image detection apparatus may obtain different sample images under different light sources. The first sample image, obtained under the first light source, can be understood as a clear, non-reflective image obtained under normal illumination, from which the number of blackheads can be clearly determined; the second sample image, obtained under the second light source, can be understood as an image with a reflective region obtained under strong light, that is, an image containing an area in which the number of blackheads is unknown. For example, the first and second sample images may be obtained under illumination conditions such as candlelight, kerosene lamps, tungsten-iodine lamps, tungsten-filament lamps, photographic highlights, overcast sky and cloudy sky, which enriches the samples and improves the diversity of the training set. By obtaining sample images under different illumination, the embodiment of the application avoids sample homogeneity and increases the diversity of sample images across lighting scenarios, thereby improving the accuracy of deep neural network training.
402. Determining the number of blackheads in the first sample image.
In the embodiment of the present application, the image processing apparatus may determine the number of blackheads in the first sample image. For example, the first sample image may be input into the YOLO network to obtain the coordinates of the blackheads in it, and counting the coordinates gives the number of blackheads in the first sample image; alternatively, the number of blackheads in the first sample image may be detected by other target detection algorithms, such as background subtraction, the optical flow method or the inter-frame difference method; or the number of blackheads may be obtained by manual counting. It is understood that the embodiment of the present application does not limit the specific way of determining the number of blackheads in the first sample image.
403. Inputting the second sample image and the number of blackheads of the first sample image into the deep neural network, and training the deep neural network.
In this embodiment, the deep neural network may be a BP neural network. Specifically, the BP neural network is first initialized: each connection weight is assigned a random number in the interval [-1, 1], an error function e is set, and the calculation precision and learning rate are set. Then an n-th training sample and its expected output are randomly selected, where n is greater than or equal to 1 and less than or equal to the total number of samples; the input and output of each hidden-layer neuron are calculated; the partial derivatives of the error function with respect to each output-layer neuron are calculated using the expected output and the actual output of the network; the connection weights are updated using the partial derivatives of the output-layer neurons together with the outputs of the hidden-layer neurons, and using the partial derivatives of the hidden-layer neurons together with the inputs of the input-layer neurons. After the connection weights of the model are corrected, the global error of the new model is recalculated, and it is judged whether the current model converges, for example whether the difference between two consecutive errors is smaller than a specified value; if not, the next random training sample and its expected output are selected and learning continues. Finally, the trained BP neural network model is obtained. Training in this way can effectively improve detection precision. It can be understood that the embodiments of the present application do not limit the specific network model of the deep neural network, nor the specific way of training the network.
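A minimal numpy sketch of the loop just described follows; the layer sizes, learning rate, iteration budget and stand-in data are illustrative assumptions, not the patented configuration.

```python
# Minimal BP-network training sketch following the steps above.
# Dimensions, learning rate, and the stand-in data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, lr, eps = 256, 32, 0.01, 1e-8        # assumed sizes / precision
X = rng.random((300, d_in))                       # stand-in flattened sample crops
y = rng.integers(0, 20, (300, 1)).astype(float)   # stand-in blackhead counts

W1 = rng.uniform(-1.0, 1.0, (d_in, d_hid))        # connection weights in [-1, 1]
W2 = rng.uniform(-1.0, 1.0, (d_hid, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

prev_e = np.inf
for step in range(50000):
    n = rng.integers(len(X))                      # randomly selected n-th sample
    x, t = X[n:n + 1], y[n:n + 1]                 # input and expected output
    h = sigmoid(x @ W1)                           # hidden-layer output
    o = h @ W2                                    # actual output (predicted count)
    e = 0.5 * float((o - t) ** 2)                 # error function
    g_o = o - t                                   # dE/d(output)
    g_h = (g_o @ W2.T) * h * (1.0 - h)            # dE/d(hidden pre-activation)
    W2 -= lr * (h.T @ g_o)                        # update connection weights
    W1 -= lr * (x.T @ g_h)
    if abs(prev_e - e) < eps:                     # convergence check
        break
    prev_e = e
```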
By implementing the embodiment of the application, the sample images are obtained under different illumination, the diversity of the sample images can be increased, the sample images are input into the deep neural network for training, the training difficulty can be increased through a large number of samples with different characteristics, and the accuracy of deep neural network training is further improved.
It will be appreciated that the method embodiments shown in fig. 1 and fig. 4 each have their own emphasis; for implementations not described in detail in one embodiment, reference may be made to the other embodiment.
The method of the embodiments of the present application is set forth above in detail and the apparatus of the embodiments of the present application is provided below.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an image detection apparatus according to an embodiment of the present application, and as shown in fig. 5, the image detection apparatus includes:
a first determining unit 501, configured to determine a target image, where the target image is an image with the number of blackheads to be detected;
a first input/output unit 502, configured to input the target image into a depth detection neural network, and output a blackhead position of a first area image and a position of a second area image, where a degree of reflection of the second area image is greater than a degree of reflection of the first area image;
a second input/output unit 503, configured to input the second area image into a deep neural network, and output the number of blackheads in the second area image;
a second determining unit 504, configured to determine the number of blackheads in the first area image according to the blackhead position of the first area image, and determine the number of blackheads in the target image according to the number of blackheads in the first area image and the number of blackheads in the second area image.
Optionally, the target image is a region corresponding to a nose;
referring to fig. 6, fig. 6 is a schematic structural diagram of a first determining subunit provided in an embodiment of the present application, and as shown in fig. 6, the first determining unit 501 includes:
the first determining subunit 5011 is configured to determine the face key points in the face image after the face image is acquired; the face key points in the face image comprise a first key point, a second key point, a third key point and a fourth key point; the first key point is the leftmost point of the nasal ala among the face key points, the second key point is the rightmost point of the nasal ala, the third key point is the uppermost point of the nasal bridge, and the fourth key point is the lowermost point of the nasal septum;
a second determining subunit 5012, configured to determine a first side length of the target image according to an abscissa of the first key point and an abscissa of the second key point;
a third determining subunit 5013, configured to determine a second side length of the target image according to the ordinate of the third key point and the ordinate of the fourth key point;
the fourth determining subunit 5014 is configured to determine the target image according to the first side length and the second side length.
Optionally, referring to fig. 7, fig. 7 is a schematic structural diagram of a first input/output unit provided in an embodiment of the present application, and as shown in fig. 7, the first input/output unit 502 includes:
a reduction subunit 5021, configured to reduce the resolution of the target image to obtain a reduced-resolution target image;
a first input/output subunit 5022, configured to input the reduced-resolution target image to the depth detection neural network, and output a blackhead position of the first area image and a position of the second area image;
alternatively, referring to fig. 8, fig. 8 is a schematic structural diagram of another first input/output unit provided in an embodiment of the present application, and as shown in fig. 8, the first input/output unit 502 includes:
an enhancer unit 5023, configured to enhance the resolution of the target image to obtain a resolution-enhanced target image;
a second input/output subunit 5024, configured to input the target image with enhanced resolution to the depth detection neural network, and output the blackhead position of the first area image and the position of the second area image.
Optionally, referring to fig. 9, fig. 9 is a schematic structural diagram of another image detection apparatus provided in an embodiment of the present application, and as shown in fig. 9, the apparatus further includes:
an obtaining unit 505, configured to obtain a first sample image and a second sample image, where a degree of light reflection of the second sample image is greater than a degree of light reflection of the first sample image, and an object included in the first sample image is the same as an object included in the second sample image;
a third determining unit 506, configured to determine the number of blackheads of the first sample image;
a training unit 507, configured to input the second sample image and the number of blackheads of the first sample image into the deep neural network, and train the deep neural network.
Optionally, referring to fig. 10, fig. 10 is a schematic structural diagram of an obtaining unit provided in an embodiment of the present application, and as shown in fig. 10, the obtaining unit 505 includes:
a first acquisition sub-unit 5051 for acquiring the first sample image under a first light source;
a second acquiring subunit 5052 is configured to acquire the second sample image under a second light source, where the illumination intensity of the first light source is smaller than the illumination intensity of the second light source.
Referring to fig. 11, fig. 11 is a schematic structural diagram of an image detection apparatus provided in an embodiment of the present application, where the image detection apparatus includes a processor 1101, a memory 1102, and an input/output interface 1103, and the processor 1101, the memory 1102, and the input/output interface 1103 are connected to each other through a bus.
The memory 1102 includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or a portable read-only memory (CD-ROM), and the memory 1102 is used for storing related instructions and data.
The input/output interface 1103 is used to communicate with other devices.
The processor 1101 may be one or more Central Processing Units (CPUs), and in the case where the processor 1101 is one CPU, the CPU may be a single-core CPU or a multi-core CPU.
Specifically, the implementation of each operation may correspond to the corresponding description of the method embodiments shown in fig. 1 and fig. 4, and to the corresponding description of the apparatus embodiments shown in fig. 5 to fig. 10.
For example, the processor 1101 may be configured to execute the methods shown in step 101 and step 104, or the methods executed by the first determining unit 501, the second determining unit 504, and the like.
As another example, in an embodiment, the processor 1101 may be configured to determine the first sample image and the second sample image, or to determine the target image; the first sample image, the second sample image, or the target image may also be acquired through the input/output interface 1103, and how they are acquired is not limited in the embodiments of the present application.
As yet another example, in an embodiment, the input/output interface 1103 may be further configured to perform the methods performed by the first input/output unit 502 and the second input/output unit 503.
It will be appreciated that fig. 11 shows only a simplified design of the image detection apparatus. In practical applications, the image detection apparatus may further include other necessary components, including but not limited to any number of input/output interfaces, processors, and memories, and all image detection apparatuses that can implement the embodiments of the present application fall within the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
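Pulling the pieces together, a compact sketch of the overall flow described above and claimed below might look as follows. Both network callables and the (x, y, w, h) box format are placeholders of the author of this sketch, not the patented models.

```python
def detect_blackhead_count(target, depth_detection_net, deep_net):
    """Count blackheads in a target image (illustrative pipeline).

    depth_detection_net is assumed to return the blackhead positions in
    the low-reflection first area and an (x, y, w, h) box for the
    reflective second area; deep_net returns a count for that crop.
    """
    first_positions, (x, y, w, h) = depth_detection_net(target)
    second_area = target[y:y + h, x:x + w]
    # Number of blackheads in the first area: one detection per position.
    count_first = len(first_positions)
    # Number in the second area: predicted by the trained counting network.
    count_second = deep_net(second_area)
    return count_first + count_second
```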

Claims (8)

1. An image detection method, comprising:
determining a target image, wherein the target image is an image in which the number of blackheads is to be detected;
inputting the target image into a depth detection neural network, and outputting the blackhead position of a first area image and the position of a second area image, wherein the reflection degree of the second area image is greater than that of the first area image;
inputting the second area image into a deep neural network, and outputting the number of blackheads in the second area image;
determining the number of blackheads in the first area image according to the blackhead position of the first area image, and determining the number of blackheads in the target image according to the number of blackheads in the first area image and the number of blackheads in the second area image;
before the inputting the second area image into the deep neural network and outputting the number of blackheads in the second area image, the method further comprises:
acquiring a first sample image and a second sample image, wherein the reflection degree of the second sample image is greater than that of the first sample image, and an object included in the first sample image is the same as an object included in the second sample image;
determining the number of blackheads of the first sample image;
inputting the second sample image and the number of blackheads of the first sample image into the deep neural network, and training the deep neural network.
2. The method of claim 1, wherein the target image is an image of a region corresponding to a nose, and the determining the target image comprises:
after a face image is acquired, determining face key points in the face image, wherein the face key points in the face image comprise a first key point and a second key point, the first key point is the leftmost point of the nose wing among the face key points, and the second key point is the rightmost point of the nose wing among the face key points;
and determining the target image according to the first key point and the second key point.
3. The method according to claim 1 or 2, wherein the inputting the target image into a depth detection neural network and outputting a blackhead position of a first region image and a position of a second region image comprises:
reducing the resolution of the target image to obtain the target image with the reduced resolution;
inputting the target image with the reduced resolution into the depth detection neural network, and outputting the blackhead position of the first area image and the position of the second area image;
or, enhancing the resolution of the target image to obtain the target image with the enhanced resolution;
and inputting the target image with the enhanced resolution into the depth detection neural network, and outputting the blackhead position of the first area image and the position of the second area image.
4. The method of claim 1, wherein the acquiring the first sample image and the second sample image comprises:
acquiring the first sample image under a first light source;
acquiring the second sample image under a second light source, wherein the illumination intensity of the first light source is less than that of the second light source.
5. An image detection apparatus, characterized by comprising:
a first determining unit, configured to determine a target image, wherein the target image is an image in which the number of blackheads is to be detected;
a first input/output unit, configured to input the target image into a depth detection neural network and output a blackhead position of a first area image and a position of a second area image, wherein a light reflection degree of the second area image is greater than a light reflection degree of the first area image;
a second input/output unit, configured to input the second area image into a deep neural network and output the number of blackheads in the second area image;
a second determining unit, configured to determine the number of blackheads in the first area image according to the blackhead position of the first area image, and determine the number of blackheads in the target image according to the number of blackheads in the first area image and the number of blackheads in the second area image;
an acquiring unit, configured to acquire a first sample image and a second sample image, wherein a light reflection degree of the second sample image is greater than a light reflection degree of the first sample image, and an object included in the first sample image is the same as an object included in the second sample image;
a third determining unit, configured to determine the number of blackheads of the first sample image;
and a training unit, configured to input the second sample image and the number of blackheads of the first sample image into the deep neural network and train the deep neural network.
6. The apparatus of claim 5, wherein the first input-output unit comprises:
a reducing subunit, configured to reduce the resolution of the target image to obtain the target image with the reduced resolution;
a first input/output subunit, configured to input the target image with the reduced resolution into the depth detection neural network and output a blackhead position of a first area image and a position of a second area image;
or, an enhancing subunit, configured to enhance the resolution of the target image to obtain the target image with the enhanced resolution;
and a second input/output subunit, configured to input the target image with the enhanced resolution into the depth detection neural network and output the blackhead position of the first area image and the position of the second area image.
7. An image detection device, characterized by comprising a processor, a memory, and an input/output interface, wherein the processor, the memory, and the input/output interface are interconnected through lines, and the memory stores program instructions which, when executed by the processor, cause the processor to carry out the method of any one of claims 1 to 4.
8. A computer-readable storage medium, in which a computer program is stored, the computer program comprising program instructions which, when executed by a processor of an image detection apparatus, cause the processor to carry out the method of any one of claims 1 to 4.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811309965.3A CN109544516B (en) 2018-11-05 2018-11-05 Image detection method and device

Publications (2)

Publication Number Publication Date
CN109544516A (en) 2019-03-29
CN109544516B (en) 2020-11-13

Family

ID=65846521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811309965.3A Active CN109544516B (en) 2018-11-05 2018-11-05 Image detection method and device

Country Status (1)

Country Link
CN (1) CN109544516B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110334229A (en) * 2019-04-30 2019-10-15 王松年 Visual display method, equipment, system and computer readable storage medium
CN110310252A (en) * 2019-04-30 2019-10-08 深圳市四季宏胜科技有限公司 Blackhead absorbs method, apparatus and computer readable storage medium
CN110796115B (en) * 2019-11-08 2022-12-23 厦门美图宜肤科技有限公司 Image detection method and device, electronic equipment and readable storage medium
CN112380962A (en) * 2020-11-11 2021-02-19 成都摘果子科技有限公司 Animal image identification method and system based on deep learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106469302A (en) * 2016-09-07 2017-03-01 成都知识视觉科技有限公司 A kind of face skin quality detection method based on artificial neural network
CN107403166A (en) * 2017-08-02 2017-11-28 广东工业大学 A kind of method and apparatus for extracting facial image pore feature
CN107679507A (en) * 2017-10-17 2018-02-09 北京大学第三医院 Facial pores detecting system and method
CN108090450A (en) * 2017-12-20 2018-05-29 深圳和而泰数据资源与云技术有限公司 Face identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 518000 Guangdong science and technology innovation and Research Institute, Shenzhen, Nanshan District No. 6, science and technology innovation and Research Institute, D 10, 1004, 10

Patentee after: Shenzhen Hetai intelligent home appliance controller Co.,Ltd.

Address before: 518000 Guangdong science and technology innovation and Research Institute, Shenzhen, Nanshan District No. 6, science and technology innovation and Research Institute, D 10, 1004, 10

Patentee before: SHENZHEN H&T DATA RESOURCES AND CLOUD TECHNOLOGY Ltd.