CN107341799B - Diagnostic data acquisition method and device - Google Patents

Diagnostic data acquisition method and device

Info

Publication number
CN107341799B
Authority
CN
China
Prior art keywords
contour
image
target area
information
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710554912.7A
Other languages
Chinese (zh)
Other versions
CN107341799A (en)
Inventor
刘力政
谢晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Seekov Electronic Technology Co ltd
Original Assignee
Shanghai Seekov Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Seekov Electronic Technology Co ltd filed Critical Shanghai Seekov Electronic Technology Co ltd
Priority to CN201710554912.7A
Publication of CN107341799A
Application granted
Publication of CN107341799B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • G06T7/0014Biomedical image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1347Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1365Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30088Skin; Dermal

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention provides a diagnostic data acquisition method and apparatus. In one embodiment, the diagnostic data acquisition method comprises the following steps: acquiring a target area image of a user; performing texture detection on the target area image to obtain texture information in the target area image; performing region division on the target area image according to the texture information to obtain a plurality of identification regions; and comparing and analyzing the plurality of identification regions against pre-stored sample image features to obtain target area diagnostic data for the user.

Description

Diagnostic data acquisition method and device
Technical Field
The invention relates to the field of image processing, in particular to a diagnostic data acquisition method and device.
Background
With the development of computer technology, intelligence has spread into many fields, and intelligent equipment can greatly reduce manual labor and improve working efficiency. In the medical field, however, pathological information about a patient is generally obtained by a doctor directly observing various parts of the patient's body and relying on experience, which is inefficient and heavily influenced by subjective factors. How to use computer technology to obtain pathological parameters or diagnostic data for a patient is therefore a major topic of current research.
Disclosure of Invention
In view of the above, an object of the embodiments of the present invention is to provide a diagnostic data acquisition method and apparatus.
The diagnostic data acquisition method provided by the embodiment of the invention comprises the following steps:
acquiring a target area image of a user;
performing texture detection on the target area image to obtain texture information in the target area image;
performing region division on the target area image according to the texture information to obtain a plurality of identification regions; and
comparing and analyzing the plurality of identification regions against pre-stored sample image features to obtain the target area diagnostic data of the user.
An embodiment of the present invention further provides a diagnostic data acquiring apparatus, where the apparatus includes:
the image acquisition module is used for acquiring a target area image of a user;
the information obtaining module is used for carrying out texture detection on the target area image to obtain texture information in the target area image;
the region obtaining module is used for performing region division on the target area image according to the texture information to obtain a plurality of identification regions; and
the data obtaining module is used for comparing and analyzing the plurality of identification regions against pre-stored sample image features to obtain the target area diagnostic data of the user.
Compared with the prior art, the diagnostic data acquisition method and apparatus of the embodiments of the invention acquire a target area image of a user, analyze the image to divide it into a plurality of identification regions, and further analyze the different identification regions to obtain diagnostic data. A doctor therefore does not need to inspect each part of the user's body to obtain the diagnostic data, and data acquisition efficiency can be greatly improved.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a block diagram of an electronic terminal according to a preferred embodiment of the present invention.
FIG. 2 is a flowchart of a diagnostic data acquisition method according to a preferred embodiment of the present invention.
Fig. 3 is a detailed flowchart of step S102 in the diagnostic data acquisition method according to the preferred embodiment of the invention.
Fig. 4 is a detailed flowchart of step S1022 in the diagnostic data acquiring method according to the preferred embodiment of the present invention.
Fig. 5 is a detailed flowchart of step S104 in the diagnostic data acquisition method according to the preferred embodiment of the invention.
Fig. 6 is a functional block diagram of a diagnostic data acquisition apparatus according to a preferred embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
As shown in fig. 1, a block diagram of an electronic terminal 100 is provided. The electronic terminal 100 includes a diagnostic data acquisition device 110, a memory 111, a memory controller 112, a processor 113, a peripheral interface 114, an input/output unit 115, a display unit 116, and a camera unit 117. It is to be understood that the structure of the electronic terminal 100 shown in fig. 1 is merely illustrative and is not intended to limit that structure; for example, the electronic terminal 100 may include more or fewer components than shown, or have a configuration different from that shown in fig. 1.
The memory 111, the memory controller 112, the processor 113, the peripheral interface 114, the input/output unit 115, the display unit 116, and the camera unit 117 are electrically connected to one another, directly or indirectly, to enable data transmission and interaction. For example, the components may be electrically connected via one or more communication buses or signal lines. The diagnostic data acquisition device 110 includes at least one software functional module that can be stored in the memory 111 in the form of software or firmware, or embedded in the operating system (OS) of the electronic terminal 100. The processor 113 is configured to execute executable modules stored in the memory, such as the software functional modules or computer programs included in the diagnostic data acquisition device 110.
The memory 111 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), or an Electrically Erasable Programmable Read-Only Memory (EEPROM). The memory 111 is configured to store a program, and the processor 113 executes the program after receiving an execution instruction. The method executed by the electronic terminal 100 and defined by the process disclosed in any embodiment of the present invention may be applied to, or implemented by, the processor 113.
The processor 113 may be an integrated circuit chip having signal processing capability. It may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), or the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. It may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The peripheral interface 114 couples various input/output devices to the processor 113 and the memory 111. In some embodiments, the peripheral interface 114, the processor 113, and the memory controller 112 may be implemented in a single chip. In other examples, they may each be implemented as separate chips.
The input/output unit 115 is used to receive data input by a user. The input/output unit 115 may be, but is not limited to, a mouse, a keyboard, and the like. An audio unit provides an audio interface to the user and may include one or more microphones, one or more speakers, and audio circuitry.
The display unit 116 provides an interactive interface (e.g., a user operation interface) between the electronic terminal 100 and the user, or is used to display image data for the user's reference. In this embodiment, the display unit 116 may be a liquid crystal display or a touch display. A touch display may be a capacitive or resistive touch screen supporting single-point and multi-point touch operations, meaning the touch display can sense touch operations generated simultaneously at one or more positions on its surface and send the sensed touch operations to the processor for calculation and processing.
The camera unit 117 is used to take pictures or videos, which may be stored in the memory 111. The camera unit 117 may specifically include a lens module, an image sensor, and a flash. The lens module images the photographed target and maps the image onto the image sensor, which receives light from the lens module and exposes accordingly to record the image information. Specifically, the image sensor may be based on a Complementary Metal Oxide Semiconductor (CMOS), a Charge-Coupled Device (CCD), or another image-sensing principle. The flash is used for exposure compensation during shooting; generally, the flash used in the electronic terminal 100 may be a Light-Emitting Diode (LED) flash.
Referring to fig. 2, a flowchart of a diagnostic data obtaining method applied to the electronic terminal shown in fig. 1 according to a preferred embodiment of the invention is shown. The specific process shown in fig. 2 will be described in detail below.
Step S101, acquiring a target area image of a user.
In this embodiment, the target area may be a tongue, a palm, a sole, or the like of the user. The following description will take the acquisition of an image of a user's hand as an example.
In this embodiment, the step S101 may include: acquiring an original image of the target area through a camera unit of the electronic terminal; and processing the original image to obtain the target area image.
In one embodiment, the electronic terminal may process the original image with a Retinex algorithm to obtain the target area image. Retinex theory makes two main claims: first, the color of an object is determined by its ability to reflect long-, medium-, and short-wave light, not by the absolute intensity of the reflected light; second, the color of an object is not affected by non-uniform illumination and is therefore consistent. According to Retinex theory, the brightness perceived by the human eye depends on the ambient illumination and on the reflection of that illumination by the object's surface. The first mathematical expression of the Retinex algorithm is: I(x, y) = L(x, y) × R(x, y).
Wherein: i (x, y) represents the above-described original image data; l (x, y) represents an illumination component of ambient light; r (x, y) represents the reflection component of the target object carrying image detail information, i.e. the target area image data obtained after processing.
By transforming the first expression, for example by taking logarithms of both sides, a second expression can be obtained: log[R(x, y)] = log[I(x, y)] − log[L(x, y)];
The corresponding target area image data R(x, y) are then calculated from the original image data I(x, y); R(x, y) can be regarded as the target area image data obtained after the original image data are processed. In one embodiment, L(x, y) may be obtained by Gaussian blurring of the image data I(x, y).
Specifically, the L (x, y) can be calculated in the following manner.
First, the scales used to blur the original image data I(x, y) are specified.
Then, the original image is blurred at each specified scale to obtain the illumination component L(x, y) of the ambient light.
In this embodiment, the original image is subjected to Gaussian blur at each scale, yielding blurred images L_i(x, y) at multiple scales, where the subscript i indexes the scales.
The results are accumulated over all scales to obtain a third expression:
log[R(x, y)] = log[R(x, y)] + Weight(i) × (log[I(x, y)] − log[L_i(x, y)]);
where Weight(i) represents the weight for scale i; in this embodiment, the weights over all scales sum to 1, and in one example the weight of each scale is equal.
Finally, log[R(x, y)] is quantized to pixel values in the range 0 to 255 as the final output. The quantization computes the maximum value Max and minimum value Min of log[R(x, y)] and then performs linear quantization on each value, giving expression four for the processed target area image data R(x, y):
R(x, y) = 255 × (log[R(x, y)] − Min) / (Max − Min)
Processing the acquired original image in this way yields a target area image that is less affected by external factors such as dynamic range, edges, and color, so that the results of subsequent image processing are more realistic and reliable.
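To make the processing concrete, the following is a minimal Python sketch of the multi-scale Retinex pipeline described above (expressions one through four), assuming OpenCV's GaussianBlur as the blurring operator and equal per-scale weights; the function name, the scale values, and the epsilon guard are illustrative assumptions, not taken from the patent.

```python
import cv2
import numpy as np

def multi_scale_retinex(image, sigmas=(15, 80, 250), eps=1.0):
    """Sketch of expressions one-four: log R = sum_i Weight(i) * (log I - log L_i),
    where L_i is the original image Gaussian-blurred at scale sigma_i, followed by
    linear quantization of log R to pixel values in [0, 255]."""
    img = image.astype(np.float64) + eps          # eps avoids log(0)
    log_r = np.zeros_like(img)
    weight = 1.0 / len(sigmas)                    # equal weights summing to 1
    for sigma in sigmas:
        # L_i(x, y): illumination component estimated by Gaussian blur at this scale
        illumination = cv2.GaussianBlur(img, (0, 0), sigma) + eps
        log_r += weight * (np.log(img) - np.log(illumination))
    # expression four: linear quantization using Max and Min of log R
    lo, hi = log_r.min(), log_r.max()
    return ((log_r - lo) / (hi - lo) * 255.0).astype(np.uint8)
```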
Step S102, performing texture detection on the target area image to obtain texture information in the target area image.
For example, when the target region is a palm of a user, the texture information may be a palm texture position, a palm texture gray value, and the like in the target region image.
And step S103, carrying out region division on the target region image according to the texture information to obtain a plurality of identification regions.
In one example, the target area image is divided into eight identification regions corresponding to the palmistry mounts: the Mount of Mercury, the Mount of the Sun, the Mount of Saturn, the Mount of Jupiter, the First Mount of Mars, the Second Mount of Mars, the Mount of the Moon, and the Mount of Venus.
And step S104, comparing and analyzing the plurality of identification areas with pre-stored sample image characteristics to obtain the target area diagnostic data of the user.
In this embodiment, as shown in fig. 3, the step S102 may include the following steps.
Step S1021, performing edge detection and image analysis on the target area image to obtain first texture information of the target area.
In this embodiment, the first texture information may include texture information for the several clearly visible palm lines.
In this embodiment, the electronic terminal may obtain the edge positions in the target area image to obtain the palm contour. The electronic terminal then determines the first texture information according to the color within the target area image and the distance from the palm edge.
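As a rough sketch of this step, the fragment below uses OpenCV's Canny detector and findContours (OpenCV 4 API) to recover the palm outline; the patent does not name a specific edge detector, so these choices and the threshold values are assumptions.

```python
import cv2

def palm_contour(gray_image):
    """Edge detection on the target area image, then take the largest
    external contour as the palm outline (step S1021 sketch)."""
    edges = cv2.Canny(gray_image, 50, 150)        # gray_image: 8-bit grayscale
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)     # largest contour = palm edge
```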
Step S1022, performing image search on the target region image to obtain second texture information in the target region image.
The first texture information and the second texture information together form the texture information.
In this embodiment, the second texture information may be the contour of the palm.
In this embodiment, step S1022 may include: finding the coordinates and gray value of each point in the target area image by the chain-code method to determine the second texture information, where the second texture information includes the coordinates and gray values of the points in the texture.
A specific implementation of finding the second texture information of the target area image by the chain-code method is described in detail below.
In one embodiment, the directions of the chain code are defined as the directions from a center point to its 8 neighbors. In one example, the coordinate origin may be set to the upper-left corner of the display interface of the electronic terminal, with the x-axis running left to right and the y-axis top to bottom. Each increment of the code value rotates the direction clockwise by 45 degrees. Since every center point has 8 neighbors, there are only 8 chain code values in this embodiment, represented by the eight values 0 to 7. If the gray value of a neighboring point is needed, the coordinates of that neighbor are obtained first, by adding the coordinate offset between the neighbor and the center point to the center point's coordinates. In one example, chain code value 0 denotes the point offset from the center by one unit in the positive x direction, and rotating clockwise gives the successively increasing chain code values for the remaining neighbor positions.
First, the coordinates of a boundary point serving as the starting point A0 are obtained by the vertical scanning method.
Step a: from the starting point, the entry direction is taken to have chain code value 0, and each adjacent point is scanned clockwise starting from the direction with chain code value 5 to search for the next point; the chain code value and coordinates of that point are recorded, and the adjacent point A1 whose gray value is the same as or close to that of the starting point is found. Of course, in other examples, the neighbors may also be scanned clockwise starting from chain code value 4.
Step b: from the current point Ai, each adjacent point is scanned clockwise starting from the direction given by the entry chain code value of Ai plus 5; the next boundary point Ai+1 is found, and its direction chain code value and coordinates are recorded and stored in an array, together with the running count of chain codes.
Step b is repeated until the found boundary point An is the starting point A0, or the coordinates of the found boundary point An coincide with those of the previous point An-1, at which point the scan ends. In this embodiment, when the boundary point An is the starting point A0, the second texture is a closed texture; when the coordinates of the boundary point An coincide with the previous point An-1, the second texture is an open line texture.
The array A0, A1, A2, …, An represents the points of the second texture. The second texture information includes the coordinates, gray values, and other information of the points A0, A1, A2, …, An.
Determining the second texture information of the target area image by the chain-code method in this way can improve the reliability of image texture identification.
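The procedure above can be sketched directly in Python under the stated conventions (chain code 0 points one unit in +x, codes increase by 45° clockwise with the y-axis pointing down, and the clockwise scan starts from entry code + 5); the gray-value tolerance used to decide that two points are "close" is an assumption:

```python
import numpy as np

# 8-neighbour offsets indexed by chain code 0-7: code 0 is one unit in +x,
# each increment rotates 45 degrees clockwise (y-axis points down).
OFFSETS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def trace_texture(gray, start, tol=10, max_steps=100000):
    """Trace a second-texture boundary by chain codes from starting point A0
    (start: (x, y) coordinates found by vertical scanning). Stops on returning
    to A0 (closed texture) or when the next point coincides with the previous
    point An-1 (open line texture)."""
    ref = int(gray[start[1], start[0]])               # gray value at A0
    points, codes, entry = [start], [], 0             # entry code at A0 taken as 0
    current = start
    for _ in range(max_steps):
        nxt = None
        for k in range(8):                            # clockwise scan of neighbours
            code = (entry + 5 + k) % 8                # start from entry code + 5
            dx, dy = OFFSETS[code]
            x, y = current[0] + dx, current[1] + dy
            if (0 <= y < gray.shape[0] and 0 <= x < gray.shape[1]
                    and abs(int(gray[y, x]) - ref) <= tol):
                nxt, entry = (x, y), code
                break
        if nxt is None or nxt == start:               # dead end, or closed at A0
            break
        if len(points) >= 2 and nxt == points[-2]:    # coincides with An-1: line
            break
        points.append(nxt)
        codes.append(entry)
        current = nxt
    return points, codes                              # coordinates and chain codes
```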
In this embodiment, as shown in fig. 4, step S1022 may include the following steps.
Step S10221, performing edge detection on the target area image to obtain an edge image.
Step S10222, calculating a distance potential energy field based on Euclidean distance transformation for the edge image, and taking the distance potential energy field as an external force field.
Step S10223, setting an initial contour of an active contour model on the external force field, and performing deformation based on the active contour model to obtain a pre-updated contour.
Step S10224, determining whether there is a false contour in the pre-updated contours.
A false contour is a contour formed by points of the pre-updated contour that do not satisfy the force-field distribution rule within the neighborhood of a specified range around each point.
If a false contour exists in the pre-updated contour, step S10225 is performed; if not, the pre-updated contour is taken as the target contour.
In this embodiment, false contours generally fall into two types: continuous false contour points and discrete false contour points. A continuous false contour consists of four or more connected false contour points along the pre-updated contour; a discrete false contour consists of three or fewer connected false contour points.
Step S10225, searching for corresponding contour points of all false contours on the pre-updated contour.
In one embodiment, a contour point in any false contour of the pre-updated contour is selected as the initial search point, and a search is performed along the two directions perpendicular to the continuous false contour points, denoted direction 1 and direction 2. Searching along both directions, if a newly found point lies along the search direction, it is also a false contour point; the points of the pre-updated contour are searched in this way to find the complete false contour.
Step S10226, merging the false contour in the pre-updated contour into a contour point corresponding to the false contour to obtain a target contour, where the target contour forms a second texture, and information on the target contour is the second texture information.
In this embodiment, for continuous false contour points, a single corresponding contour point is calculated and all the continuous false contour points are merged into it. For discrete false contour points, each point corresponds to its own contour point; the contour point corresponding to each false contour point is obtained, and each false contour point is merged into its corresponding contour point to obtain the target contour.
Determining the target contour from the active contour model to obtain the second texture information can likewise improve the reliability of image texture identification.
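One possible reading of steps S10221–S10222 is sketched below: the distance potential energy field is computed with SciPy's Euclidean distance transform, and its negative gradient serves as the external force field acting on the contour; the deformation of the active contour itself (step S10223) is not shown, and the function name is illustrative.

```python
import numpy as np
from scipy import ndimage

def distance_force_field(edge_image):
    """Build the distance potential energy field from an edge map and use its
    negative gradient as the external force field (steps S10221-S10222 sketch).
    edge_image: array with non-zero values at edge pixels."""
    # distance from every pixel to the nearest edge pixel
    potential = ndimage.distance_transform_edt(edge_image == 0)
    # external force = -grad(potential): points downhill, toward the edges
    fy, fx = np.gradient(potential)
    return potential, -fx, -fy
```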
In the present embodiment, as shown in fig. 5, step S104 includes the following steps.
Step S1041, training the corresponding identification regions of the pre-stored sample images each with a Markov model, to obtain as many Markov-model-based parameter sets as there are identification regions.
In this embodiment, the pre-stored sample images may include a plurality of sample images, for example, a corresponding sample image may be pre-stored for each disease condition; as another example, a corresponding sample image may be pre-stored for a condition corresponding to each identified region.
The Markov model here is a doubly stochastic process comprising a Markov chain X = {X_t, 0 < t < T + 1} and an observation process Y = {Y_t, 0 < t < T + 1}. In this embodiment, the parameter set is computed using a hidden Markov model. A Hidden Markov Model (HMM) is a quintuple (ΩQ, ΩO, A, B, π), where:
ΩQ represents a finite set of states;
ΩO represents a finite set of observations;
A represents the transition probabilities;
B represents the emission probabilities;
π represents the initial state distribution.
Here λ = (A, B, π) denotes the parameters of a given hidden Markov model, and O = o1, …, oT denotes an observation sequence.
First, a sample set {x1, x2, …, xn} is obtained from any pre-stored sample image, and the cluster centers are adjusted as follows: the mean and the mean square deviation S_i of all features in the sample set are calculated, and 2n + 1 initial condensation points are generated from them.
The initial condensation points are classified and adjusted by the K-means method; if the results before and after an adjustment are the same, or the number of adjustments reaches a specified count, clustering terminates. Otherwise, the following adjustments are made.
In this embodiment, it is first checked whether any class in the current classification contains fewer sample points than the minimum sample count; such a class is deleted, and its points do not participate in subsequent adjustment.
Next, if the current classification has m classes in total, the splitting threshold k is taken as the average of the maximum mean square deviations of the classes. For each class, the mean square deviations of all its features are examined and the maximum mean square deviation S_i is found; if S_i > k, the class is decomposed into two classes.
Similarly, if the current classification has m classes in total, the merging threshold t is taken as the average of the minimum mean square deviations of the classes. If the distance between the condensation points of any two classes in the current classification is less than t, the two classes are merged, and the new center point after merging is taken as the condensation point of the new class.
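Because the generating and threshold formulas survive only as figure references in the source, the sketch below is an ISODATA-style reconstruction under explicit assumptions: the split threshold k is the average of the per-class maximum deviations, the merge threshold t is the average of the per-class minimum deviations, and a class is split by offsetting its center by the per-feature deviation.

```python
import numpy as np

def adjust_condensation_points(samples, centers, min_count=5):
    """One split/merge adjustment pass over the condensation points
    (ISODATA-style sketch; thresholds k, t and the split offset are
    assumptions, since the source gives the formulas only as figures)."""
    dists = np.linalg.norm(samples[:, None] - centers[None, :], axis=2)
    labels = dists.argmin(axis=1)
    # drop classes with fewer sample points than the minimum sample count
    groups = [samples[labels == c] for c in range(len(centers))
              if (labels == c).sum() >= min_count]
    if not groups:
        return centers
    means = [g.mean(axis=0) for g in groups]
    # split any class whose maximum mean square deviation S_i exceeds k
    s_max = np.array([g.std(axis=0).max() for g in groups])
    k = s_max.mean()
    adjusted = []
    for g, c, s in zip(groups, means, s_max):
        if s > k:
            adjusted += [c + g.std(axis=0), c - g.std(axis=0)]
        else:
            adjusted.append(c)
    # merge any two classes whose condensation points are closer than t
    t = np.array([g.std(axis=0).min() for g in groups]).mean()
    merged, used = [], set()
    for i, ci in enumerate(adjusted):
        if i in used:
            continue
        for j in range(i + 1, len(adjusted)):
            if j not in used and np.linalg.norm(ci - adjusted[j]) < t:
                ci = (ci + adjusted[j]) / 2   # merged centre becomes the new point
                used.add(j)
        merged.append(ci)
    return np.array(merged)
```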
In this embodiment, the array formed by the condensation points obtained after clustering is used as the observation value sequence O′ = o′1, …, o′T. The Baum-Welch algorithm is then used to find a local optimum, yielding the parameter set λ = (A, B, π).
Step S1042, obtaining a feature sequence for each identification region in the target area image and taking the feature sequence as an observation value.
Step S1043, feeding the observation value into the Markov model whose parameter set corresponds to the identification region of that observation value, calculating to obtain a calculation result, and obtaining the target area diagnostic data from the calculation result.
In the image-based palm diagnosis recognition process, the trained hidden Markov model parameters obtained above are used, the extracted local hand-image features are taken as observation values, and the maximum-probability path can be determined with the Viterbi algorithm. Given λ = (A, B, π) and the observation sequence from the target region O = o1, …, oT, the state sequence Q = q1 … qT is computed so that the probability is maximized, that is: Q = argmax P(Q | O, λ).
In this embodiment, the diagnostic data are obtained by deriving the target area result from the computed sequence Q = q1 … qT. The diagnostic data may include the user's possible conditions, and so on.
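The recognition step can be made concrete with the standard Viterbi recurrence; the sketch below assumes integer-coded observations and log-domain arithmetic (both implementation choices, not mandated by the patent):

```python
import numpy as np

def viterbi(obs, A, B, pi):
    """Given lambda = (A, B, pi) and an observation index sequence
    obs = o1..oT, return the state path Q = q1..qT maximizing P(Q | O, lambda)."""
    T, N = len(obs), len(pi)
    log_delta = np.log(pi) + np.log(B[:, obs[0]])      # initialisation, t = 1
    psi = np.zeros((T, N), dtype=int)                  # back-pointers
    for t in range(1, T):
        scores = log_delta[:, None] + np.log(A)        # scores[i, j]: i -> j
        psi[t] = scores.argmax(axis=0)                 # best predecessor of j
        log_delta = scores[psi[t], np.arange(N)] + np.log(B[:, obs[t]])
    # backtrack the maximum-probability state sequence
    q = [int(log_delta.argmax())]
    for t in range(T - 1, 0, -1):
        q.append(int(psi[t][q[-1]]))
    return q[::-1]
```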
According to the diagnostic data acquisition method of this embodiment of the invention, a target area image of the user is acquired, the image is analyzed to divide it into a plurality of identification regions, and the different identification regions are further analyzed to obtain diagnostic data. A doctor therefore does not need to inspect each part of the user's body to obtain the diagnostic data, and data acquisition efficiency can be greatly improved.
Please refer to fig. 6, which is a functional block diagram of the diagnostic data acquiring apparatus 110 shown in fig. 1 according to a preferred embodiment of the present invention. The modules, units and sub-units in the device of the embodiment are used for executing the steps in the method. The diagnostic data acquisition device comprises an image acquisition module 1101, an information acquisition module 1102, a region acquisition module 1103 and a data acquisition module 1104.
The image acquiring module 1101 is configured to acquire a target area image of a user.
The information obtaining module 1102 is configured to perform texture detection on the target region image to obtain texture information in the target region image.
The region obtaining module 1103 is configured to perform region division on the target region image according to the texture information to obtain a plurality of identification regions.
The data obtaining module 1104 is configured to compare and analyze the plurality of identification areas with pre-stored sample image features to obtain target area diagnostic data of the user.
The information obtaining module 1102 includes: a first obtaining unit and a second obtaining unit.
The first obtaining unit is used for carrying out edge detection and image analysis on the target area image to obtain first texture information of the target area.
The second obtaining unit is used for performing an image search on the target area image to obtain second texture information in the target area image; the first texture information and the second texture information together form the texture information.
In this embodiment, the second obtaining unit is further configured to find coordinates and a gray value of each point in the target region image in a chain code manner to determine the second texture information, where the second texture information includes the coordinates and the gray value of each point in the texture.
In this embodiment, the second obtaining unit further includes: a detection subunit, a calculation subunit, a deformation subunit, a judgment subunit, a search subunit, and a merging subunit.
The detection subunit is configured to perform edge detection on the target area image to obtain an edge image.
And the calculation subunit is used for calculating a distance potential energy force field based on Euclidean distance transformation on the edge image and taking the distance potential energy force field as an external force field.
The deformation subunit is configured to set an initial contour of the active contour model on the external force field, and perform deformation based on the active contour model to obtain a pre-updated contour.
The judging subunit is configured to judge whether a false contour exists in the pre-updated contour, where a false contour is a contour formed by points of the pre-updated contour that do not satisfy the force-field distribution rule within the neighborhood of a specified range around each point.
And the searching subunit is configured to search, when there is a false contour in the pre-updated contour, contour points corresponding to all the false contours on the pre-updated contour.
The merging subunit is configured to merge a false contour in the pre-updated contour into a contour point corresponding to the false contour to obtain a target contour, where the target contour forms a second texture, and information on the target contour is the second texture information.
In this embodiment, the data obtaining module 1104 includes: a parameter obtaining unit, an observation value obtaining unit, and a data obtaining unit.
The parameter obtaining unit is used for training the corresponding recognition regions of the pre-stored sample image respectively based on the Markov model to obtain the parameter sets based on the Markov model, and the number of the parameter sets is the same as that of the recognition regions.
The observation value obtaining unit is used for obtaining a feature sequence of each identification area in the target area image and determining the feature sequence as an observation value.
The data obtaining unit is configured to place the observation value in a markov model corresponding to a parameter group including an identification region corresponding to the observation value, perform calculation to obtain a calculation result, and obtain the target region according to the calculation result to obtain the diagnostic data.
For other details of the present embodiment, reference may be further made to the description of the above method embodiments, which are not repeated herein.
By acquiring a target area image of the user, analyzing the image to divide it into a plurality of identification regions, and further analyzing the different identification regions to obtain diagnostic data, the diagnostic data acquisition device of this embodiment of the invention removes the need for a doctor to inspect each part of the user's body and can thus greatly improve data acquisition efficiency.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in its protection scope.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (4)

1. A diagnostic data acquisition apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring a target area image of a user;
the information obtaining module is used for carrying out texture detection on the target area image to obtain texture information in the target area image;
the region obtaining module is used for performing region division on the target area image according to the texture information to obtain a plurality of identification regions, the target area image being divided into eight identification regions: the Mount of Mercury, the Mount of the Sun, the Mount of Saturn, the Mount of Jupiter, the First Mount of Mars, the Second Mount of Mars, the Mount of the Moon, and the Mount of Venus; and
the data obtaining module is used for comparing and analyzing the plurality of identification areas with pre-stored sample image characteristics to obtain target area diagnostic data of the user;
the data obtaining module comprises:
the parameter obtaining unit is used for training the corresponding identification regions of a pre-stored sample image each with a Markov model, to obtain as many Markov-model-based parameter sets as there are identification regions;
an observed value obtaining unit, configured to obtain a feature sequence of each identification region in the target region image, and determine the feature sequence as an observed value;
and the data obtaining unit is used for feeding the observation value into the Markov model whose parameter set corresponds to the identification region of that observation value, calculating to obtain a calculation result, and obtaining the target area diagnostic data according to the calculation result.
2. The diagnostic data acquisition device as set forth in claim 1, wherein the information obtaining module comprises:
the first obtaining unit is used for carrying out edge detection and image analysis on the target area image to obtain first texture information of the target area;
the second obtaining unit is used for performing an image search on the target area image to obtain second texture information in the target area image; the first texture information and the second texture information form the texture information.
3. The diagnostic data acquisition apparatus according to claim 2, wherein the second obtaining unit is further configured to find coordinates and a gray scale value of each point in the target region image by a chain code method to determine the second texture information, and the second texture information includes the coordinates and the gray scale value of each point in a texture.
4. The diagnostic data acquisition apparatus as set forth in claim 2, wherein the second obtaining unit further comprises:
the detection subunit is used for carrying out edge detection on the target area image so as to obtain an edge image;
the calculation subunit is used for calculating a distance potential energy force field based on Euclidean distance transformation on the edge image and taking the distance potential energy force field as an external force field;
the deformation subunit is used for setting an initial contour of the active contour model on the external force field, and performing deformation based on the active contour model to obtain a pre-updated contour;
the judging subunit is used for judging whether a false contour exists in the pre-updated contour, wherein a false contour is a contour formed by points of the pre-updated contour that do not satisfy the force-field distribution rule within the neighborhood of a specified range around each point;
the searching subunit is used for searching corresponding contour points of all the false contours on the pre-updated contour when the pre-updated contour has the false contour;
and the merging subunit is used for merging the false contour in the pre-updated contour into the contour point corresponding to the false contour to obtain a target contour, wherein the target contour forms a second texture, and the information on the target contour is the information of the second texture.
CN201710554912.7A 2017-07-10 2017-07-10 Diagnostic data acquisition method and device Active CN107341799B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710554912.7A CN107341799B (en) 2017-07-10 2017-07-10 Diagnostic data acquisition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710554912.7A CN107341799B (en) 2017-07-10 2017-07-10 Diagnostic data acquisition method and device

Publications (2)

Publication Number Publication Date
CN107341799A CN107341799A (en) 2017-11-10
CN107341799B true CN107341799B (en) 2020-06-23

Family

ID=60218670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710554912.7A Active CN107341799B (en) 2017-07-10 2017-07-10 Diagnostic data acquisition method and device

Country Status (1)

Country Link
CN (1) CN107341799B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635696A (en) * 2018-12-04 2019-04-16 上海掌门科技有限公司 Biological information detection method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003038461A (en) * 2001-07-13 2003-02-12 Ritsushin Chin Automatic palm print diagnostic device characterized in that image of palm is analyzed by computer and disease can be diagnosed from the analysis result
JP2006067486A (en) * 2004-08-30 2006-03-09 Unbalance Corp Palm reading system and palm reading server
CN101264010A (en) * 2008-04-01 2008-09-17 许君德 Holography palm print qi-blood medical detecting method and detecting equipment thereby
CN101732053A (en) * 2009-11-27 2010-06-16 候万春 System and method for health analysis through electronic palm prints or electronic face prints
CN101751514A (en) * 2009-12-10 2010-06-23 深圳华为通信技术有限公司 Method and terminal for obtaining health information
CN101919685A (en) * 2010-04-30 2010-12-22 广州中医药大学 Tongue diagnosis intelligent control and diagnosis system and diagnosis method thereof
CN103679724A (en) * 2013-12-13 2014-03-26 中南大学 Slope approximant straight line detection method
CN104751147A (en) * 2015-04-16 2015-07-01 成都汇智远景科技有限公司 Image recognition method
CN105147247A (en) * 2015-07-31 2015-12-16 广东欧珀移动通信有限公司 User health recognition method and mobile terminal

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003038461A (en) * 2001-07-13 2003-02-12 Ritsushin Chin Automatic palm print diagnostic device characterized in that image of palm is analyzed by computer and disease can be diagnosed from the analysis result
JP2006067486A (en) * 2004-08-30 2006-03-09 Unbalance Corp Palm reading system and palm reading server
CN101264010A (en) * 2008-04-01 2008-09-17 许君德 Holography palm print qi-blood medical detecting method and detecting equipment thereby
CN101732053A (en) * 2009-11-27 2010-06-16 候万春 System and method for health analysis through electronic palm prints or electronic face prints
CN101751514A (en) * 2009-12-10 2010-06-23 深圳华为通信技术有限公司 Method and terminal for obtaining health information
CN101919685A (en) * 2010-04-30 2010-12-22 广州中医药大学 Tongue diagnosis intelligent control and diagnosis system and diagnosis method thereof
CN103679724A (en) * 2013-12-13 2014-03-26 中南大学 Slope approximant straight line detection method
CN104751147A (en) * 2015-04-16 2015-07-01 成都汇智远景科技有限公司 Image recognition method
CN105147247A (en) * 2015-07-31 2015-12-16 广东欧珀移动通信有限公司 User health recognition method and mobile terminal

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Application of Digital Image Processing and Analysis in Healthcare Based on Medical Palmistry; Hardik Pandit et al.; International Conference on Intelligent Systems and Data Processing (ICISD) 2011; 2011-12-31; 56-59 *
A fast method for extracting palm contour feature points; 金璟璇; Journal of Wuhan University of Technology; 2010-12-15; Vol. 32, No. 23; 154-156, 178 *
Active contour model based on force field analysis; 侯志强 et al.; Chinese Journal of Computers; 2004-06-12; Vol. 27, No. 6; Section 2.1 *
Research on palmprint recognition technology based on singular value decomposition and hidden Markov models; 顾芳; China Masters' Theses Full-text Database, Information Science and Technology; 2011-12-15; Section 6.3, Fig. 6-3 *
Research on palmprint localization and segmentation algorithms for palmprint-based disease diagnosis; 门阔; China Masters' Theses Full-text Database, Information Science and Technology; 2013-08-15; Section 1.4, Chapter 4 paragraph 1, Sections 4.1-4.2, Fig. 1.8 *

Also Published As

Publication number Publication date
CN107341799A (en) 2017-11-10

Similar Documents

Publication Publication Date Title
CN108875522B (en) Face clustering method, device and system and storage medium
CN108304829B (en) Face recognition method, device and system
JP7094702B2 (en) Image processing device and its method, program
CN108230383B (en) Hand three-dimensional data determination method and device and electronic equipment
US11315281B2 (en) Pupil positioning method and apparatus, VR/AR apparatus and computer readable medium
US20200250428A1 (en) Shadow and cloud masking for remote sensing images in agriculture applications using a multilayer perceptron
CN109815770B (en) Two-dimensional code detection method, device and system
CN109376631B (en) Loop detection method and device based on neural network
US9363499B2 (en) Method, electronic device and medium for adjusting depth values
US10216979B2 (en) Image processing apparatus, image processing method, and storage medium to detect parts of an object
US20170132456A1 (en) Enhanced face detection using depth information
JP6630633B2 (en) Semi-supervised method for training multiple pattern recognition and registration tool models
US20190147225A1 (en) Image processing apparatus and method
CN108875517B (en) Video processing method, device and system and storage medium
CN109640066B (en) Method and device for generating high-precision dense depth image
CN110807427B (en) Sight tracking method and device, computer equipment and storage medium
KR102434703B1 (en) Method of processing biometric image and apparatus including the same
JP2015527625A (en) physical measurement
JP6684475B2 (en) Image processing apparatus, image processing method and program
JP7334141B2 (en) Multimodal Dense Correspondence Image Processing System, Radar Imaging System, Method and Program
CN105225222B (en) Automatic assessment of perceptual visual quality of different image sets
CN106524909B (en) Three-dimensional image acquisition method and device
CN112949440A (en) Method for extracting gait features of pedestrian, gait recognition method and system
CN107145741B (en) Ear diagnosis data acquisition method and device based on image analysis
JP2014021602A (en) Image processor and image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant