CN111050086B - Image processing method, device and equipment - Google Patents


Info

Publication number
CN111050086B
CN111050086B (application CN201911315899.5A)
Authority
CN
China
Prior art keywords: target, area, weight, region, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911315899.5A
Other languages
Chinese (zh)
Other versions
CN111050086A (en)
Inventor
王春
刘欣
杨忠
曹幸静
韦佩兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Jinshan Science and Technology Group Co Ltd
Original Assignee
Chongqing Jinshan Medical Technology Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Jinshan Medical Technology Research Institute Co Ltd filed Critical Chongqing Jinshan Medical Technology Research Institute Co Ltd
Priority to CN201911315899.5A
Publication of CN111050086A
Application granted
Publication of CN111050086B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/71: Circuitry for evaluating the brightness variation

Abstract

The embodiment of the invention discloses an image processing method, apparatus, device and storage medium. The method comprises: acquiring a target image and performing object detection on it; if a target object exists in the target image, acquiring attribute information of the target object and dividing the target image into a target area and at least one reference area according to that attribute information; acquiring a target weight for the target area and a reference weight for each reference area, together with the brightness value of the target area and the brightness value of each reference area; and obtaining a target brightness value of the target image as the weighted sum of these brightness values under their respective weights. The embodiment of the invention can calculate the brightness value of an image more accurately and improve its value as a reference.

Description

Image processing method, device and equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an image processing device, and a computer storage medium.
Background
An image is a vivid depiction or representation of an objective object and is the most common information carrier in human social activity; in a broad sense, an image is any picture that produces a visual effect. Every image has a brightness value, which reflects how bright or dark the image is. If the brightness value is low, the image appears dark, i.e. its image quality is poor; in that case the exposure strategy is adjusted to improve the quality of subsequently acquired images. The brightness value of an image is therefore an important reference for adjusting the exposure strategy, and how to calculate it well has become a research hotspot.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device, image processing equipment and a storage medium, which can better calculate the brightness value of an image and improve the reference value of the brightness value.
In one aspect, an embodiment of the present invention provides an image processing method, where the image processing method includes:
acquiring a target image, and carrying out object detection on the target image;
if the target image has the target object, acquiring attribute information of the target object, and dividing the target image into a target area and at least one reference area according to the attribute information, wherein the target area comprises the target object, and the reference area does not comprise the target object;
acquiring a target weight of the target area and a reference weight of each reference area, and a brightness value of the target area and a brightness value of each reference area, wherein the target weight is greater than the reference weight;
and obtaining a target brightness value of the target image by weighted summation according to the brightness value and the target weight of the target area and the brightness value and the reference weight of each reference area.
In another aspect, an embodiment of the present invention provides an image processing apparatus, including:
the processing unit is used for acquiring a target image and carrying out object detection on the target image;
the processing unit is configured to, if a target object exists in the target image, acquire attribute information of the target object, and divide the target image into a target area and at least one reference area according to the attribute information, where the target area includes the target object and the reference area does not include the target object;
the processing unit is used for acquiring a target weight of the target area and a reference weight of each reference area, and a brightness value of the target area and a brightness value of each reference area, wherein the target weight is greater than the reference weight;
and the weighting unit is used for obtaining a target brightness value of the target image by weighted summation according to the brightness value and the target weight of the target area and the brightness value and the reference weight of each reference area.
In another aspect, an embodiment of the present invention provides an image processing apparatus, where the image processing apparatus includes an input interface and an output interface, and the image processing apparatus further includes:
a processor adapted to implement one or more instructions; and
a computer storage medium storing one or more instructions adapted to be loaded by the processor and to perform the steps of:
acquiring a target image, and carrying out object detection on the target image;
if the target image has the target object, acquiring attribute information of the target object, and dividing the target image into a target area and at least one reference area according to the attribute information, wherein the target area comprises the target object, and the reference area does not comprise the target object;
acquiring a target weight of the target area and a reference weight of each reference area, and a brightness value of the target area and a brightness value of each reference area, wherein the target weight is greater than the reference weight;
and obtaining a target brightness value of the target image by weighted summation according to the brightness value and the target weight of the target area and the brightness value and the reference weight of each reference area.
In yet another aspect, an embodiment of the present invention provides a computer storage medium, where one or more instructions are stored, and the one or more instructions are adapted to be loaded by a processor and execute the following steps:
acquiring a target image, and carrying out object detection on the target image;
if the target image has the target object, acquiring attribute information of the target object, and dividing the target image into a target area and at least one reference area according to the attribute information, wherein the target area comprises the target object, and the reference area does not comprise the target object;
acquiring a target weight of the target area and a reference weight of each reference area, and a brightness value of the target area and a brightness value of each reference area, wherein the target weight is greater than the reference weight;
and obtaining a target brightness value of the target image by weighted summation according to the brightness value and the target weight of the target area and the brightness value and the reference weight of each reference area.
After the target image is acquired, object detection can be performed on it. If a target object exists in the target image, its attribute information can be acquired and the target image divided into a target area and at least one reference area according to that information; dividing regions by the attribute information of the target object improves both the accuracy of the division and the distinction between regions. A target brightness value of the target image is then obtained by weighted summation of the brightness value of the target area under the target weight and the brightness value of each reference area under its reference weight. Because the target weight is greater than the reference weights, the brightness value of the target area has a larger influence on the overall target brightness value, which improves the reference value of that overall figure; when the exposure strategy is adjusted according to the target brightness value, it is effectively adjusted according to the brightness of the target area, which helps guarantee the image quality of the target area.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1a is a schematic structural diagram of an endoscope provided by an embodiment of the present invention;
FIG. 1b is a schematic structural view of another endoscope provided by embodiments of the present invention;
FIG. 2 is a flowchart illustrating an image processing method according to an embodiment of the present invention;
FIG. 3a is a schematic diagram of a method for determining a target area according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of the division of the remaining area according to the embodiment of the present invention;
FIG. 4 is a flowchart illustrating an image processing method according to another embodiment of the present invention;
FIG. 5a is a diagram illustrating image partitioning according to another embodiment of the present invention;
FIG. 5b is an application scene diagram of an image processing method according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
Image processing is a technique that uses a computer to analyze an image to achieve a desired result. To calculate the brightness value of an image more accurately, an embodiment of the present invention proposes an image processing scheme. The scheme may be performed by an image processing device, i.e. a device equipped with an image sensor: a component that converts the light image on its photosensitive surface into a proportional electrical signal so as to capture images. The sensor may specifically be a CMOS (Complementary Metal-Oxide-Semiconductor) sensor or a CCD (Charge-Coupled Device) sensor; for convenience of illustration, a CMOS sensor is assumed below. The image processing device may specifically be any one of: an endoscope, a driving recorder, a digital camera, a video camera, or a smartphone, among others. An endoscope is a medical electronic-optical instrument, integrating optical, mechanical, and electronic technologies, that can enter a human body cavity or the cavity of an internal organ for direct observation, diagnosis, and treatment; it may in particular be a capsule endoscope or a tube endoscope. A capsule endoscope is an endoscope shaped like a capsule, and may comprise an image sensor, a data transmission and control module, and a radio-frequency communication module, as shown in FIG. 1a; a tube endoscope is an endoscope built into a thin, flexible tube, as shown in FIG. 1b.
In a specific implementation, the general principle of the image processing scheme is as follows. After the target image is acquired, it is first divided into a plurality of regions. Next, a weight is determined for each region according to its degree of attention, with attention and weight positively correlated; the degree of attention reflects how much the user cares about the image information contained in the region. It may be preset from empirical values or actual business requirements, or determined in real time from the image information in each region; for example, a region that contains the target object receives higher attention than the other regions. The target brightness value of the target image is then obtained by weighted summation of the brightness value of each region under its corresponding weight. Optionally, once the target brightness value is obtained, the exposure strategy may be adjusted according to it, improving the quality of subsequently acquired images. By determining each region's weight from its degree of attention, the embodiments of the present invention ensure that the brightness value (i.e. exposure state) of a high-attention region has a large influence on the target brightness value of the whole image, while that of a low-attention region has a small influence.
As a result, even when a high-attention region is too dark while a low-attention region is too bright, the target brightness value of the whole image remains below the threshold, so the adjustment of the exposure strategy is still triggered; this improves the quality of new images and guarantees that the high-attention region is not displayed too dark.
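As a rough sketch of the trigger logic just described (the threshold value and function name are illustrative assumptions, not specified in the patent text):

```python
# Hypothetical sketch: the weighted target brightness value is compared
# against a threshold to decide whether the exposure strategy should be
# adjusted. The threshold of 100.0 is an assumed example value.
def needs_exposure_adjustment(target_brightness_value, threshold=100.0):
    # Because high-attention regions carry larger weights, a dark
    # high-attention region keeps the weighted value low, so the
    # adjustment still triggers even if low-attention regions are bright.
    return target_brightness_value < threshold
```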
Based on the description of the image processing scheme described above, an embodiment of the present invention proposes an image processing method that can be executed by the above-mentioned image processing apparatus. Referring to fig. 2, the image processing method may include the following steps S201 to S204:
S201, acquiring a target image, and carrying out object detection on the target image.
The target image may be any image, such as a medical image, a face image, a landscape image, or a video frame. After acquiring the target image, the image processing device may perform object detection on it using a trained neural network model, to detect whether a target object exists in it. A target object is an object of interest to the user, and may be an abnormal object or a normal object. An abnormal object is one that exhibits abnormal features in the target image because of some abnormality, such as a lesion showing disease features in a medical image, a device component showing fault features in a device image, or a vehicle part showing fault features in a vehicle image. A normal object, in contrast, is one that does not produce abnormal features in the target image, such as a puppy or flowers and grass.
S202, if the target image has the target object, acquiring attribute information of the target object, and dividing the target image into a target area and at least one reference area according to the attribute information.
If a target object exists in the target image, the trained neural network model can be called to obtain its attribute information, which may include the center coordinates and size information of the target object. After the attribute information is acquired, the target image is divided into a target area and at least one reference area according to it, where the target area includes the target object and the reference areas do not. In a specific implementation, the target area is determined from the target image according to the center coordinates and size information: its center point is the target center point indicated by the center coordinates, and its area is greater than or equal to the size indicated by the size information. For example, if the center coordinates of the target center point are (2, 2) and the size information is a circle of radius 2, the target area determined from the target image is as shown in FIG. 3a. After the target area is determined, the remaining area of the target image outside the target area is determined, and the pixel coordinates of each remaining pixel within it are acquired; the remaining area can then be divided into at least one reference area according to those pixel coordinates.
The target image may have a plurality of region division ranges. One way to divide the remaining area into at least one reference area according to the pixel coordinates is: compute the distance between each remaining pixel and the target center point from its pixel coordinates and the center coordinates, and take the region formed by the remaining pixels whose distances fall into the same division range as one reference area. For example, assume the target image has 3 region division ranges: [0, 6), [6, 10), and [10, ∞). If the distances between pixels 1-100 and the target center point all fall within [0, 6), the region formed by pixels 1-100 becomes reference region 1. Similarly, if the distances between pixels 2-200 and the target center point all fall within [6, 10), the region they form becomes reference region 2, and so on; the remaining area is eventually divided into 3 reference regions, as shown in FIG. 3b.
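The distance-based bucketing described above can be sketched as follows; the function name and the encoding of the ranges are assumptions for illustration:

```python
import math

# Sketch of dividing the remaining area into reference regions: each
# remaining pixel is assigned to the half-open distance range (measured
# from the target center point) that its distance falls into, e.g.
# [0, 6), [6, 10), [10, inf) as in the example above.
def assign_reference_region(pixel_xy, center_xy, range_edges=(6, 10)):
    """Return the index of the distance range containing the pixel.

    range_edges=(6, 10) encodes the ranges [0, 6), [6, 10), [10, inf).
    """
    d = math.dist(pixel_xy, center_xy)
    for i, edge in enumerate(range_edges):
        if d < edge:
            return i
    return len(range_edges)  # outermost, unbounded range
```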
S203, acquiring a target weight of the target area and a reference weight of each reference area, and a brightness value of the target area and a brightness value of each reference area.
After the target image is divided into the target area and at least one reference area, a target weight for the target area and a reference weight for each reference area can be obtained, with the target weight greater than the reference weights. The brightness values of the areas can also be obtained: the brightness value of the target area is the mean of the brightness values of the pixels in the target area, and the brightness value of each reference area is likewise the mean of the brightness values of the pixels in that reference area.
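The per-region brightness computation above amounts to a simple mean; a minimal sketch, operating on a flat list of pixel luminances rather than a real image array:

```python
# Brightness value of a region = mean of the luminance values of the
# pixels inside that region, as described in step S203.
def region_brightness(pixel_luminances):
    return sum(pixel_luminances) / len(pixel_luminances)
```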
The step of obtaining the target weight of the target region and the reference weight of each reference region may include at least the following two embodiments:
the first implementation mode comprises the following steps: reference similarity between each reference region and the target region and target similarity between the target region and the target region may be obtained, the target similarity being 1. Specifically, the reference similarity between each reference region and the target region can be calculated by adopting an image similarity algorithm; the image similarity algorithm herein may include, but is not limited to: SSIM (structural similarity metric) algorithm, cosine similarity algorithm, histogram-based similarity algorithm, mutual information-based similarity algorithm, and the like. Then, calculating the total similarity value of each acquired reference similarity and target similarity; carrying out normalization processing on the target similarity according to the total similarity value to obtain the target weight of the target area; and respectively carrying out normalization processing on the reference similarity of each reference area according to the total similarity value to obtain the reference weight of each reference area.
For example, suppose there are 3 reference regions: reference region a, reference region b, and reference region c, whose reference similarities to the target region are 0.8, 0.3, and 0.5 respectively. The total similarity value is 1 + 0.8 + 0.3 + 0.5 = 2.6. Normalizing the target similarity by this total gives a target weight of 1/2.6 ≈ 0.385. Similarly, normalizing the reference similarities gives reference weights of 0.8/2.6 ≈ 0.308 for region a, 0.3/2.6 ≈ 0.115 for region b, and 0.5/2.6 ≈ 0.192 for region c.
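The first weighting embodiment and its worked example can be sketched as follows (the function name is an assumption; the similarity computation itself, e.g. SSIM, is out of scope here):

```python
# First embodiment: the target region's self-similarity is fixed at 1,
# and all similarities are normalized by their total to give the weights.
def similarity_weights(reference_similarities):
    sims = [1.0] + list(reference_similarities)  # target similarity is 1
    total = sum(sims)
    # Returns [target_weight, ref_weight_a, ref_weight_b, ...]
    return [s / total for s in sims]

weights = similarity_weights([0.8, 0.3, 0.5])  # total similarity = 2.6
```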
In a second implementation, the target weight of the target area and a total reference weight for the at least one reference area are allocated according to a preset weight distribution rule. The rule can be set from actual business requirements or empirical values, as long as the target weight is greater than or equal to the total reference weight; for example, the rule may set a target weight of 0.6 and a total reference weight of 0.4, or a target weight of 0.7 and a total reference weight of 0.3, and so on. The reference similarity between each reference area and the target area is then obtained, and the reference weight of each reference area is determined from the total reference weight and its reference similarity.
One way to determine the reference weights from the total reference weight and the reference similarities is to first normalize the reference similarities against one another to obtain each reference region's weight proportion, and then multiply the total reference weight by each proportion. For example, with reference similarities 0.8, 0.3, and 0.5 for reference regions a, b, and c, normalization gives weight proportions of 0.8/(0.8+0.3+0.5) = 0.5 for region a, 0.3/(0.8+0.3+0.5) = 0.1875 for region b, and 0.5/(0.8+0.3+0.5) = 0.3125 for region c. If the total reference weight is 0.4, the reference weights are 0.4 × 0.5 = 0.2 for region a, 0.4 × 0.1875 = 0.075 for region b, and 0.4 × 0.3125 = 0.125 for region c.
Alternatively, the reference weights can be determined by first multiplying the total reference weight by each reference similarity to obtain an intermediate similarity for each reference region, and then normalizing the intermediate similarities and scaling by the total reference weight. For example, with a total reference weight of 0.4 and reference similarities 0.8, 0.3, and 0.5 for reference regions a, b, and c, the intermediate similarities are 0.4 × 0.8 = 0.32, 0.4 × 0.3 = 0.12, and 0.4 × 0.5 = 0.2. The resulting reference weights are 0.32/(0.32+0.12+0.2) × 0.4 = 0.2 for region a, 0.12/(0.32+0.12+0.2) × 0.4 = 0.075 for region b, and 0.2/(0.32+0.12+0.2) × 0.4 = 0.125 for region c.
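The second embodiment (both of its variants yield the same weights, as in the worked examples above) can be sketched as follows; the function name and the 0.6/0.4 split are illustrative assumptions:

```python
# Second embodiment: a preset rule fixes the target weight and the total
# reference weight (here 0.6 and 0.4, as in the example), and the total
# reference weight is split across the reference regions in proportion to
# their similarity to the target region.
def rule_based_weights(reference_similarities, target_weight=0.6):
    total_reference = 1.0 - target_weight  # e.g. 0.4
    sim_sum = sum(reference_similarities)
    proportions = [s / sim_sum for s in reference_similarities]
    return target_weight, [total_reference * p for p in proportions]

tw, refs = rule_based_weights([0.8, 0.3, 0.5])  # refs: ~0.2, ~0.075, ~0.125
```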
And S204, weighting and summing to obtain a target brightness value of the target image according to the brightness value and the target weight of the target area and the brightness value and the reference weight of each reference area.
In a specific implementation, the brightness value of the target area is weighted by the target weight to obtain the weighted brightness value of the target area; the brightness value of each reference area is weighted by its reference weight to obtain the weighted brightness value of each reference area; these weighted brightness values are then summed to give the target brightness value of the target image. For example, with a target-area brightness value of 60 and target weight 0.385, the weighted brightness value of the target area is 60 × 0.385 = 23.1. With brightness value 100 and reference weight 0.308, the weighted brightness value of reference region a is 100 × 0.308 = 30.8; with brightness value 150 and reference weight 0.115, reference region b contributes 150 × 0.115 = 17.25; with brightness value 145 and reference weight 0.192, reference region c contributes 145 × 0.192 = 27.84. Summing the four weighted brightness values gives a target brightness value of 23.1 + 30.8 + 17.25 + 27.84 = 98.99.
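Step S204 and its worked example reduce to a weighted sum; a minimal sketch (function name assumed):

```python
# Target brightness value = sum over regions of (brightness x weight),
# with the target area listed first, then the reference areas.
def target_brightness(brightness_values, weights):
    return sum(b * w for b, w in zip(brightness_values, weights))

# Values from the worked example above: target area, then reference
# regions a, b, and c.
value = target_brightness([60, 100, 150, 145], [0.385, 0.308, 0.115, 0.192])
```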
After the target image is acquired, object detection can be performed on it. If a target object exists in the target image, its attribute information can be acquired and the target image divided into a target area and at least one reference area according to that information; dividing regions by the attribute information of the target object improves both the accuracy of the division and the distinction between regions. A target brightness value of the target image is then obtained by weighted summation of the brightness value of the target area under the target weight and the brightness value of each reference area under its reference weight. Because the target weight is greater than the reference weights, the brightness value of the target area has a larger influence on the overall target brightness value, which improves the reference value of that overall figure; when the exposure strategy is adjusted according to the target brightness value, it is effectively adjusted according to the brightness of the target area, which helps guarantee the image quality of the target area.
Fig. 4 is a schematic flow chart of another image processing method according to an embodiment of the present invention. The image processing method may be performed by the above-mentioned image processing apparatus. Referring to fig. 4, the image processing method may include the following steps S401 to S408:
S401, acquiring a target image and carrying out object detection on the target image.
After acquiring the target image, the image processing device may perform object detection on it to detect whether a target object exists in the target image. If a target object exists, steps S402-S404 are executed; otherwise, steps S405-S408 are executed.
S402, if the target image has the target object, acquiring attribute information of the target object, and dividing the target image into a target area and at least one reference area according to the attribute information, wherein the target area comprises the target object, and the reference area does not comprise the target object.
S403, acquiring a target weight of the target area and a reference weight of each reference area, and a brightness value of the target area and a brightness value of each reference area, wherein the target weight is greater than the reference weight.
S404, weighting and summing to obtain a target brightness value of the target image according to the brightness value and the target weight of the target area and the brightness value and the reference weight of each reference area.
S405, if the target object does not exist in the target image, determining an image center point of the target image.
S406, dividing the target image into a plurality of areas based on the central point of the image.
In steps S405 to S406, the image center point refers to the pixel located at the center position of the target image. After the image center point of the target image is determined, the target image may be divided into a plurality of regions based on it; these regions should satisfy the following constraints: at least two of the regions have unequal areas, and the region containing the image center point has the largest area. Taking the division of the target image into 9 regions as an example, a schematic diagram of the division can be seen in fig. 5a.
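As an illustrative sketch of this center-based division (the 3×3 layout and the `center_frac` parameter are assumptions for illustration, not taken from the patent), the image can be split into 9 rectangular regions whose center cell is the largest:

```python
def divide_regions(h, w, center_frac=0.5):
    """Divide an h x w image into 9 regions around the image center:
    an enlarged central region covering center_frac of each dimension,
    surrounded by 8 smaller border regions. This satisfies the patent's
    constraints: at least two regions have unequal areas, and the region
    containing the image center point has the largest area."""
    top = int(h * (1 - center_frac) / 2)   # height of top/bottom border rows
    left = int(w * (1 - center_frac) / 2)  # width of left/right border columns
    rows = [0, top, h - top, h]            # row boundaries of the 3x3 grid
    cols = [0, left, w - left, w]          # column boundaries of the 3x3 grid
    regions = []
    for i in range(3):
        for j in range(3):
            regions.append((rows[i], rows[i + 1], cols[j], cols[j + 1]))
    return regions  # (r0, r1, c0, c1) per region; index 4 is the center region
```

For a 90×90 image with the default `center_frac`, the center region is 46×46 while each corner region is only 22×22, so the center region dominates as required.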
S407, acquiring the brightness value of each region, and determining the weight of each region according to the area of each region, wherein the weight of each region is positively correlated with the area of each region.
For any given region, the mean of the luminance values of the pixels in that region may be calculated and taken as the luminance value of that region. When determining the weight of each region according to its area, at least the following three embodiments may be used:
The first embodiment: determine the weight of each region from its area, based on a correspondence between area and weight established in advance.
The second embodiment: divide the areas of the regions into at least two levels and, according to a correspondence between levels and weights established in advance, take the weight corresponding to the level to which a region's area belongs as that region's weight. For example, if the areas of region 1 and region 2 both belong to the first level, and the weight corresponding to the first level is 0.1, then the weights of region 1 and region 2 are both 0.1.
The third embodiment: normalize the area of each region to obtain its weight. For example, suppose the target image is divided into 4 regions whose areas are: region 1 (20), region 2 (45), region 3 (35), and region 4 (60). Normalizing each area then gives region 1 a weight of 20/(20+45+35+60) = 0.125, region 2 a weight of 45/(20+45+35+60) = 0.28125, region 3 a weight of 35/(20+45+35+60) = 0.21875, and region 4 a weight of 60/(20+45+35+60) = 0.375.
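The normalization of the third embodiment can be sketched in a few lines of Python (the function name is illustrative, not from the patent); it reproduces the worked numbers above:

```python
def area_weights(areas):
    """Third embodiment: normalize region areas so that the
    resulting weights sum to 1 and each weight is positively
    correlated with its region's area."""
    total = sum(areas)
    return [a / total for a in areas]
```

For the example above, `area_weights([20, 45, 35, 60])` returns `[0.125, 0.28125, 0.21875, 0.375]`.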
S408, performing a weighted summation of the weight of each region and the brightness value of each region to obtain a target brightness value of the target image.
After the weights and the brightness values of the regions are obtained, the weights and the brightness values of the regions can be weighted and summed by using a weighted summation formula shown in formula 1.1, so as to obtain a target brightness value of the target image.
R = (f1×r1 + f2×r2 + … + fn×rn) × 100%    (Formula 1.1)
In the above formula 1.1, R represents the target luminance value of the target image, fi represents the weight of the i-th region, and ri represents the luminance value of the i-th region; i ∈ [1, n], where n denotes the number of regions and n ≥ 2.
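The weighted summation of formula 1.1 can be sketched as follows (a minimal illustration; the function name is an assumption, and the ×100% factor in formula 1.1 is unity and therefore omitted):

```python
def target_brightness(weights, luminances):
    """Formula 1.1: R = sum(f_i * r_i) over the n regions,
    where f_i is the weight and r_i the luminance of region i."""
    assert len(weights) == len(luminances) and len(weights) >= 2  # n >= 2
    return sum(f * r for f, r in zip(weights, luminances))
```

With weights that sum to 1, a uniformly lit image keeps its brightness: `target_brightness([0.125, 0.28125, 0.21875, 0.375], [100, 100, 100, 100])` returns `100.0`.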
Alternatively, the target image may be a medical image acquired by an endoscope equipped with a flash lamp. Practice shows that, because the shooting environment inside the human body is dark, the effective exposure duration of the image can be controlled simply by adjusting the lighting duration of the flash lamp. Therefore, if the target brightness value is smaller than the brightness threshold, the flash duration of the flash lamp can be increased to lengthen the effective exposure duration of the endoscope, so that when the endoscope captures images based on this effective exposure duration, the brightness value of the newly acquired image is greater than the target brightness value.
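A minimal sketch of this flash-duration adjustment (the step size and upper bound are illustrative assumptions; the patent does not specify them):

```python
def adjust_flash(target_brightness, brightness_threshold,
                 flash_ms, step_ms=1.0, max_ms=50.0):
    """If the measured target brightness value falls below the
    brightness threshold, lengthen the flash duration (and hence
    the endoscope's effective exposure duration) for the next frame.
    step_ms and max_ms are illustrative, not from the patent."""
    if target_brightness < brightness_threshold:
        flash_ms = min(flash_ms + step_ms, max_ms)  # cap the flash duration
    return flash_ms
```

A frame that meets the threshold leaves the flash duration unchanged; one that falls short extends it by one step, up to the cap.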
In practical applications, the image processing apparatus may apply the above-mentioned image processing method to different application scenarios, such as a brightness value calculation scenario of a medical image, a brightness value calculation scenario of a vehicle image, a brightness value calculation scenario of a human face image, and so on. The following explains a specific application scenario of the image processing method by taking an image processing scheme applied to a brightness value calculation scenario of a medical image, that is, taking an image processing device as a capsule endoscope as an example:
When a user needs a gastroscopy, the user can swallow the capsule endoscope; after entering the human body, the capsule endoscope advances with the movement of the digestive tract. As it advances, the capsule endoscope can continuously photograph the digestive tract sections it passes through. Each time an image is captured, that image may be used as the target image, and whether a target object (i.e., a lesion) exists in it may be detected. If so, the attribute information of the lesion is acquired, and the target image is divided into a target area and at least one reference area according to that attribute information, the target area containing the lesion and the reference areas not containing it. The target weight of the target area and the reference weight of each reference area are acquired, along with the brightness values of the target area and of each reference area. A target brightness value of the target image is then obtained by weighted summation of the brightness value and target weight of the target area with the brightness value and reference weight of each reference area. If not, the target image can be divided into a plurality of areas based on the image center point; the brightness value of each area is acquired, and the weight of each area is determined according to its area. The weights and brightness values of the areas can then be weighted and summed to obtain the target brightness value of the target image.
After obtaining the target brightness value of the target image, the capsule endoscope may determine whether to adjust the exposure strategy by determining whether the target brightness value is less than a brightness threshold. And if the target brightness value is smaller than the brightness threshold, increasing the flash duration of the flash lamp to increase the effective exposure duration of the capsule endoscope, so that the brightness value of the acquired new image is larger than the target brightness value when the capsule endoscope performs image acquisition based on the effective exposure duration, and the image quality of the new image is improved. After the capsule endoscope collects the images, the collected images can be transmitted to a recorder carried by a user in real time or periodically in a wireless signal mode for recording and storing, as shown in fig. 5 b. After the examination is completed, the doctor can download the image data in the recorder to the image processing software of the terminal (such as a desktop computer) for analysis, and can issue a diagnosis report for the user according to the analysis result.
Based on the description of the above embodiment of the image processing method, the embodiment of the present invention also discloses an image processing apparatus, which may be a computer program (including a program code) running in an image processing device. The image processing apparatus may perform the method shown in fig. 2 or fig. 4. Referring to fig. 6, the image processing apparatus may operate the following units:
a processing unit 101, configured to acquire a target image and perform object detection on the target image;
the processing unit 101 is configured to, if a target object exists in the target image, acquire attribute information of the target object, and divide the target image into a target area and at least one reference area according to the attribute information, where the target area includes the target object and the reference area does not include the target object;
the processing unit 101 is configured to obtain a target weight of the target region and a reference weight of each reference region, and a luminance value of the target region and a luminance value of each reference region, where the target weight is greater than the reference weight;
and the weighting unit 102 is configured to obtain a target brightness value of the target image by weighting and summing according to the brightness value and the target weight of the target region and the brightness value and the reference weight of each reference region.
In one embodiment, the attribute information of the target object includes: center coordinates and size information of the target object; when dividing the target image into a target region and at least one reference region according to the attribute information, the processing unit 101 is specifically configured to: determining a target area from the target image according to the central coordinate of the target object and the size information; the central point of the target area is the target central point indicated by the central coordinate, and the area of the target area is larger than or equal to the size indicated by the size information; determining a residual region in the target image except the target region, and acquiring pixel coordinates of each residual pixel in the residual region; and dividing the residual area into at least one reference area according to the pixel coordinates of the residual pixels.
In yet another embodiment, the target image has a plurality of area division ranges; when the processing unit 101 divides the remaining area into at least one reference area according to the pixel coordinates of the remaining pixels, it is specifically configured to: calculating the distance between each residual pixel and the target central point according to the pixel coordinate and the central coordinate of each residual pixel; and taking the region formed by the residual pixels corresponding to the distances falling into the same region dividing range as a reference region to divide the residual region into at least one reference region.
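The distance-based division of the remaining pixels into reference regions can be sketched as follows (the pixel list and the half-open division ranges in the example are illustrative assumptions):

```python
import math

def divide_reference_regions(pixels, center, ranges):
    """Group the remaining pixels into reference regions by their
    distance to the target center point: pixels whose distances fall
    into the same division range form one reference region.
    ranges is a list of (lo, hi) half-open intervals [lo, hi)."""
    regions = [[] for _ in ranges]
    cx, cy = center
    for (x, y) in pixels:
        d = math.hypot(x - cx, y - cy)  # Euclidean distance to the center
        for k, (lo, hi) in enumerate(ranges):
            if lo <= d < hi:
                regions[k].append((x, y))
                break
    return regions
```

For example, with center `(0, 0)` and ranges `[(0, 5), (5, 11)]`, the pixel `(3, 4)` (distance 5) falls into the second reference region.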
In another embodiment, when acquiring the target weight of the target region and the reference weight of each reference region, the processing unit 101 is specifically configured to: acquiring reference similarity between each reference area and the target area and target similarity between the target area and the target area, wherein the target similarity is 1; calculating the total similarity value of each acquired reference similarity and the target similarity; carrying out normalization processing on the target similarity according to the total similarity value to obtain the target weight of the target area; and respectively carrying out normalization processing on the reference similarity of each reference area according to the total similarity value to obtain the reference weight of each reference area.
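This similarity-normalization scheme can be sketched as follows (a hypothetical helper; the patent does not prescribe how the reference similarities themselves are computed):

```python
def similarity_weights(reference_sims):
    """The target region's similarity with itself is fixed at 1;
    all similarities are normalized by their total, so that the
    target weight exceeds each reference weight whenever every
    reference similarity is strictly less than 1."""
    target_sim = 1.0
    total = target_sim + sum(reference_sims)  # total similarity value
    target_weight = target_sim / total
    reference_weights = [s / total for s in reference_sims]
    return target_weight, reference_weights
```

For instance, with reference similarities `[0.5, 0.5]`, the target weight is `0.5` and each reference weight is `0.25`; the three weights sum to 1.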
In another embodiment, when the processing unit 101 is configured to obtain the target weight of the target region and the reference weight of each reference region, it is specifically configured to: distributing the target weight of the target area and the total reference weight value of the at least one reference area according to a preset weight distribution rule; acquiring reference similarity between each reference area and the target area; and determining the reference weight of each reference region according to the total reference weight value and the reference similarity of each reference region.
In another embodiment, when the processing unit 101 is configured to determine the reference weight of each reference region according to the total reference weight value and the reference similarity of each reference region, specifically, to: respectively carrying out normalization processing on the reference similarity of each reference area according to the reference similarity of each reference area to obtain the weight proportion of each reference area; and performing product operation by adopting the total reference weight value and the weight proportion of each reference area to obtain the reference weight of each reference area.
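The preset-allocation scheme of these two paragraphs can be sketched as follows (the 0.6/0.4 split in the usage example is an illustrative assumption; the comment marks the further assumption that the preset rule makes the two allocations sum to 1):

```python
def allocate_weights(target_weight, total_ref_weight, reference_sims):
    """Split a preset total reference weight among the reference
    regions in proportion to their similarity to the target region.
    Assumes the preset distribution rule makes target_weight and
    total_ref_weight sum to 1 (an illustrative assumption)."""
    assert abs(target_weight + total_ref_weight - 1.0) < 1e-9
    total_sim = sum(reference_sims)
    # Normalized similarities give each region's weight proportion
    proportions = [s / total_sim for s in reference_sims]
    # Product of the total reference weight and each proportion
    return [total_ref_weight * p for p in proportions]
```

For example, with a 0.6/0.4 split and two equally similar reference regions, each reference region receives a weight of 0.2.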
In yet another embodiment, the processing unit 101 is further configured to: if the target image does not have the target object, determining an image center point of the target image; dividing the target image into a plurality of regions based on the image center point, wherein the areas of at least two regions in the plurality of regions are not equal, and the area of the region containing the image center point is the largest; acquiring the brightness value of each region, and determining the weight of each region according to the area of each region, wherein the weight of each region is positively correlated with the area of each region; the weighting unit 102 may further be configured to perform weighted summation on the weights of the respective regions and the luminance values of the respective regions to obtain a target luminance value of the target image.
In yet another embodiment, the target image is a medical image acquired with an endoscope, the endoscope being configured with a flash; accordingly, the processing unit 101 is further operable to: and if the target brightness value is smaller than the brightness threshold, increasing the flash duration of the flash lamp to increase the effective exposure duration of the endoscope, so that the brightness value of a new image acquired by the endoscope is larger than the target brightness value when the endoscope acquires images based on the effective exposure duration.
According to an embodiment of the present invention, each step involved in the method shown in fig. 2 or fig. 4 may be performed by each unit in the image processing apparatus shown in fig. 6. For example, steps S201 to S203 shown in fig. 2 may be performed by the processing unit 101 shown in fig. 6, and step S204 may be performed by the weighting unit 102 shown in fig. 6; as another example, steps S401 to S403 and steps S405 to S407 shown in fig. 4 may be both performed by the processing unit 101 shown in fig. 6, and steps S404 and S408 may be performed by the weighting unit 102 shown in fig. 6. According to another embodiment of the present invention, the units in the image processing apparatus shown in fig. 6 may be respectively or entirely combined into one or several other units to form the image processing apparatus, or some unit(s) thereof may be further split into multiple units with smaller functions to form the image processing apparatus, which may achieve the same operation without affecting the achievement of the technical effects of the embodiments of the present invention. The units are divided based on logic functions, and in practical application, the functions of one unit can be realized by a plurality of units, or the functions of a plurality of units can be realized by one unit. In other embodiments of the present invention, the apparatus may also include other units, and in practical applications, these functions may also be implemented by being assisted by other units, and may be implemented by cooperation of a plurality of units.
According to another embodiment of the present invention, the image processing apparatus shown in fig. 6 may be constructed, and an image processing method according to an embodiment of the present invention may be implemented, by running a computer program (including program code) capable of executing the steps involved in the methods shown in fig. 2 or fig. 4 on a general-purpose computing device, such as a computer, that includes processing elements such as a central processing unit (CPU) and storage elements such as a random access memory (RAM) and a read-only memory (ROM). The computer program may, for example, be recorded on a computer-readable recording medium and loaded into and executed by the above-described computing device via that medium.
Based on the description of the method embodiment and the device embodiment, the embodiment of the invention also provides an image processing device. Referring to fig. 7, the image processing apparatus includes at least a processor 201, an input interface 202, an output interface 203, and a computer storage medium 204. The processor 201, the input interface 202, the output interface 203, and the computer storage medium 204 in the image processing apparatus may be connected by a bus or other means.
The computer storage medium 204 may reside in the memory of the image processing device; it is adapted to store a computer program comprising program instructions, and the processor 201 is adapted to execute the program instructions stored in the computer storage medium 204. The processor 201 (or CPU, central processing unit) is the computing and control core of the image processing device, adapted to implement one or more instructions, and in particular to load and execute one or more instructions so as to implement the corresponding method flow or function. In one embodiment, the processor 201 according to the embodiment of the present invention may be configured to perform a series of image processing operations, including: acquiring a target image and performing object detection on it; if a target object exists in the target image, acquiring attribute information of the target object and dividing the target image into a target area and at least one reference area according to the attribute information, wherein the target area includes the target object and the reference areas do not; acquiring the target weight of the target area and the reference weight of each reference area, together with the brightness value of the target area and of each reference area, wherein the target weight is greater than the reference weight; and obtaining a target brightness value of the target image by weighted summation of the brightness value and target weight of the target area with the brightness value and reference weight of each reference area; and so on.
An embodiment of the present invention further provides a computer storage medium (memory), which is a memory device in the image processing device used to store programs and data. It is understood that the computer storage medium herein may include a built-in storage medium in the image processing device, and may also include an extended storage medium supported by the image processing device. The computer storage medium provides a storage space that stores the operating system of the image processing device. Also stored in this storage space are one or more instructions, which may be one or more computer programs (including program code), suitable for loading and execution by the processor 201. The computer storage medium may be a high-speed RAM, or a non-volatile memory such as at least one disk memory; optionally, it may also be at least one computer storage medium located remotely from the aforementioned processor.
In one embodiment, one or more instructions stored in a computer storage medium may be loaded and executed by processor 201 to perform the corresponding steps described above with respect to the method in the image processing embodiments; in particular implementations, one or more instructions in the computer storage medium are loaded by processor 201 and perform the following steps:
acquiring a target image, and carrying out object detection on the target image;
if the target image has the target object, acquiring attribute information of the target object, and dividing the target image into a target area and at least one reference area according to the attribute information, wherein the target area comprises the target object, and the reference area does not comprise the target object;
acquiring a target weight of the target area and a reference weight of each reference area, and a brightness value of the target area and a brightness value of each reference area, wherein the target weight is greater than the reference weight;
and weighting and summing to obtain a target brightness value of the target image according to the brightness value and the target weight of the target area and the brightness value and the reference weight of each reference area.
In one embodiment, the attribute information of the target object includes: center coordinates and size information of the target object; accordingly, when the target image is divided into a target region and at least one reference region according to the attribute information, the one or more instructions may be further loaded and specifically executed by the processor 201: determining a target area from the target image according to the central coordinate of the target object and the size information; the central point of the target area is the target central point indicated by the central coordinate, and the area of the target area is larger than or equal to the size indicated by the size information; determining a residual region in the target image except the target region, and acquiring pixel coordinates of each residual pixel in the residual region; and dividing the residual area into at least one reference area according to the pixel coordinates of the residual pixels.
In yet another embodiment, the target image has a plurality of area division ranges; correspondingly, when the remaining area is divided into at least one reference area according to the pixel coordinates of the remaining pixels, the one or more instructions are loaded and specifically executed by the processor 201: calculating the distance between each residual pixel and the target central point according to the pixel coordinate and the central coordinate of each residual pixel; and taking the region formed by the residual pixels corresponding to the distances falling into the same region dividing range as a reference region to divide the residual region into at least one reference region.
In another embodiment, when obtaining the target weight of the target region and the reference weight of each reference region, the one or more instructions are loaded and specifically executed by the processor 201: acquiring reference similarity between each reference area and the target area and target similarity between the target area and the target area, wherein the target similarity is 1; calculating the total similarity value of each acquired reference similarity and the target similarity; carrying out normalization processing on the target similarity according to the total similarity value to obtain the target weight of the target area; and respectively carrying out normalization processing on the reference similarity of each reference area according to the total similarity value to obtain the reference weight of each reference area.
In another embodiment, when obtaining the target weight of the target region and the reference weight of each reference region, the one or more instructions are loaded and specifically executed by the processor 201: distributing the target weight of the target area and the total reference weight value of the at least one reference area according to a preset weight distribution rule; acquiring reference similarity between each reference area and the target area; and determining the reference weight of each reference region according to the total reference weight value and the reference similarity of each reference region.
In another embodiment, when determining the reference weight of each reference region according to the total value of the reference weights and the reference similarity of each reference region, the one or more instructions are loaded and specifically executed by the processor 201: respectively carrying out normalization processing on the reference similarity of each reference area according to the reference similarity of each reference area to obtain the weight proportion of each reference area; and performing product operation by adopting the total reference weight value and the weight proportion of each reference area to obtain the reference weight of each reference area.
In yet another embodiment, the one or more instructions may be further loaded and specifically executed by the processor 201: if the target image does not have the target object, determining an image center point of the target image; dividing the target image into a plurality of regions based on the image center point, wherein the areas of at least two regions in the plurality of regions are not equal, and the area of the region containing the image center point is the largest; acquiring the brightness value of each region, and determining the weight of each region according to the area of each region, wherein the weight of each region is positively correlated with the area of each region; and carrying out weighted summation on the weight of each region and the brightness value of each region to obtain a target brightness value of the target image.
In yet another embodiment, the target image is a medical image acquired with an endoscope, the endoscope being configured with a flash; the one or more instructions may also be loaded and specifically executed by processor 201: and if the target brightness value is smaller than the brightness threshold, increasing the flash duration of the flash lamp to increase the effective exposure duration of the endoscope, so that the brightness value of a new image acquired by the endoscope is larger than the target brightness value when the endoscope acquires images based on the effective exposure duration.
The above disclosure describes only preferred embodiments of the present invention and is of course not intended to limit the scope of the claims of the present invention; equivalent variations made according to the claims still fall within the scope of the invention.

Claims (8)

1. An image processing method, comprising:
acquiring a target image, wherein the target image is provided with a plurality of area division ranges;
carrying out object detection on the target image;
if the target image has a target object, acquiring attribute information of the target object; the target object comprises an abnormal object, the abnormal object being an object that exhibits abnormal characteristics in the target image due to a fault or anomaly, and the attribute information of the target object comprises: center coordinates and size information of the target object;
determining a target area from the target image according to the central coordinate of the target object and the size information; the central point of the target area is the target central point indicated by the central coordinate, and the area of the target area is larger than or equal to the size indicated by the size information;
determining a residual region in the target image except the target region, and acquiring pixel coordinates of each residual pixel in the residual region;
calculating the distance between each residual pixel and the target central point according to the pixel coordinate and the central coordinate of each residual pixel;
taking a region formed by residual pixels corresponding to each distance falling into the same region dividing range as a reference region to divide the residual region into at least one reference region;
acquiring a target weight of the target area and a reference weight of each reference area, and a brightness value of the target area and a brightness value of each reference area, wherein the target weight is greater than the reference weight;
and weighting and summing to obtain a target brightness value of the target image according to the brightness value and the target weight of the target area and the brightness value and the reference weight of each reference area.
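The distance-based division of the remaining region in claim 1 can be sketched as follows (an illustrative sketch assuming the division ranges are distance intervals around the target center; `divide_regions` and its parameters are hypothetical names, not the patented implementation):

```python
import numpy as np

def divide_regions(image_shape, center, target_radius, ranges):
    """Label each pixel outside the target area with the index of the
    distance range its distance to the target center falls into.
    Pixels inside the target area keep the label -1."""
    h, w = image_shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - center[0], xs - center[1])
    labels = np.full((h, w), -1, dtype=int)     # -1 marks the target area
    outside = dist > target_radius              # the remaining pixels
    for i, (lo, hi) in enumerate(ranges):
        labels[outside & (dist >= lo) & (dist < hi)] = i
    return labels
```

Each reference region is then simply the set of remaining pixels whose distances fall into the same division range, as the claim states.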
2. The method of claim 1, wherein the obtaining the target weight of the target region and the reference weight of each reference region comprises:
acquiring reference similarity between each reference area and the target area, and target similarity of the target area with respect to itself, wherein the target similarity is 1;
calculating the total similarity value of each acquired reference similarity and the target similarity;
carrying out normalization processing on the target similarity according to the total similarity value to obtain the target weight of the target area;
and respectively carrying out normalization processing on the reference similarity of each reference area according to the total similarity value to obtain the reference weight of each reference area.
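The normalization in claim 2 can be sketched as follows (an illustrative sketch; since each reference similarity is below the target similarity of 1, the normalized target weight automatically exceeds every reference weight):

```python
def similarity_weights(ref_sims):
    """Normalize the target similarity (fixed at 1) and each reference
    similarity by their total, so the weights sum to 1 and the target
    weight is the largest."""
    total = 1.0 + sum(ref_sims)
    target_w = 1.0 / total
    ref_ws = [s / total for s in ref_sims]
    return target_w, ref_ws
```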
3. The method of claim 1, wherein the obtaining the target weight of the target region and the reference weight of each reference region comprises:
distributing the target weight of the target area and the total reference weight value of the at least one reference area according to a preset weight distribution rule;
acquiring reference similarity between each reference area and the target area;
and determining the reference weight of each reference region according to the total reference weight value and the reference similarity of each reference region.
4. The method of claim 3, wherein determining the reference weight of each reference region according to the total value of the reference weights and the reference similarity of each reference region comprises:
respectively carrying out normalization processing on the reference similarity of each reference area according to the sum of the reference similarities of all reference areas to obtain the weight proportion of each reference area;
and performing product operation by adopting the total reference weight value and the weight proportion of each reference area to obtain the reference weight of each reference area.
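Claims 3 and 4 can be sketched together as follows (an illustrative sketch; the 0.6/0.4 split standing in for the "preset weight distribution rule" is an assumed value, not taken from the patent):

```python
def allocate_reference_weights(total_ref_w, ref_sims):
    """Split a preset total reference weight among the reference
    regions in proportion to each region's similarity to the target
    region (the product operation of claim 4)."""
    sim_sum = sum(ref_sims)
    proportions = [s / sim_sum for s in ref_sims]    # claim 4 normalization
    return [total_ref_w * p for p in proportions]
```

For example, with a preset rule assigning 0.6 to the target area and 0.4 in total to the reference areas, a reference region three times as similar to the target receives three times the weight.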
5. The method of claim 1, wherein the method further comprises:
if the target image does not have the target object, determining an image center point of the target image;
dividing the target image into a plurality of regions based on the image center point, wherein the areas of at least two regions in the plurality of regions are not equal, and the area of the region containing the image center point is the largest;
acquiring the brightness value of each region, and determining the weight of each region according to the area of each region, wherein the weight of each region is positively correlated with the area of each region;
and carrying out weighted summation on the weight of each region and the brightness value of each region to obtain a target brightness value of the target image.
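The fallback of claim 5, where no target object is detected, can be sketched as follows (an illustrative sketch of the positive correlation between region area and weight; proportionality is one simple way to realize it, not necessarily the patented one):

```python
def area_weights(areas):
    """Weight each region in proportion to its area, so the largest
    region (the one containing the image center) gets the largest
    weight and the weights sum to 1."""
    total = sum(areas)
    return [a / total for a in areas]
```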
6. The method of claim 1, wherein the target image is a medical image acquired with an endoscope, the endoscope being configured with a flash; the method further comprises the following steps:
and if the target brightness value is smaller than the brightness threshold, increasing the flash duration of the flash lamp to increase the effective exposure duration of the endoscope, so that the brightness value of a new image acquired by the endoscope is larger than the target brightness value when the endoscope acquires images based on the effective exposure duration.
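The exposure adjustment of claim 6 can be sketched as follows (an illustrative sketch; the fixed increment `step_ms` is an assumed tuning value, and a real controller would also cap the flash duration):

```python
def adjust_flash_duration(target_lum, threshold, flash_ms, step_ms=5):
    """If the metered target brightness is below the threshold,
    lengthen the flash duration, which increases the endoscope's
    effective exposure time for the next frame."""
    if target_lum < threshold:
        return flash_ms + step_ms
    return flash_ms
```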
7. An image processing apparatus characterized by comprising:
a processing unit for acquiring a target image having a plurality of area division ranges;
the processing unit is used for carrying out object detection on the target image;
the processing unit is used for acquiring attribute information of a target object if the target object exists in the target image; the target object comprises an abnormal object, the abnormal object being an object that presents abnormal features in the target image due to an abnormality, and the attribute information of the target object comprises: center coordinates and size information of the target object;
the processing unit is used for determining a target area from the target image according to the central coordinate of the target object and the size information; the central point of the target area is the target central point indicated by the central coordinate, and the area of the target area is larger than or equal to the size indicated by the size information;
the processing unit is used for determining the residual areas except the target area in the target image and acquiring the pixel coordinates of all residual pixels in the residual areas;
the processing unit is used for calculating the distance between each residual pixel and the target central point according to the pixel coordinate and the central coordinate of each residual pixel;
the processing unit is used for taking a region formed by residual pixels corresponding to each distance falling into the same region dividing range as a reference region so as to divide the residual region into at least one reference region;
the processing unit is used for acquiring a target weight of the target area and a reference weight of each reference area, and a brightness value of the target area and a brightness value of each reference area, wherein the target weight is greater than the reference weight;
and the weighting unit is used for weighting and summing to obtain a target brightness value of the target image according to the brightness value and the target weight of the target area and the brightness value and the reference weight of each reference area.
8. An image processing apparatus comprising an input interface and an output interface, characterized by further comprising:
a processor adapted to implement one or more instructions; and
a computer storage medium having stored thereon one or more instructions adapted to be loaded by the processor and to perform the image processing method according to any of claims 1-6.
CN201911315899.5A 2019-12-18 2019-12-18 Image processing method, device and equipment Active CN111050086B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911315899.5A CN111050086B (en) 2019-12-18 2019-12-18 Image processing method, device and equipment


Publications (2)

Publication Number Publication Date
CN111050086A CN111050086A (en) 2020-04-21
CN111050086B true CN111050086B (en) 2021-10-19

Family

ID=70237773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911315899.5A Active CN111050086B (en) 2019-12-18 2019-12-18 Image processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN111050086B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409312B (en) * 2021-08-03 2021-11-02 广东博创佳禾科技有限公司 Image processing method and device for biomedical images

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102665047A (en) * 2012-05-08 2012-09-12 北京汉邦高科数字技术股份有限公司 Exposure control method for imaging of complementary metal-oxide-semiconductor (CMOS) image sensor
KR20160112464A (en) * 2015-03-19 2016-09-28 한국전자통신연구원 Object Segmentation Apparatus and Method Using Graph Cut Based on Region
CN103702037B (en) * 2013-12-04 2017-02-08 南阳理工学院 Automatic regulating method for video image brightness
CN109543523A (en) * 2018-10-18 2019-03-29 安克创新科技股份有限公司 Image processing method, device, system and storage medium
CN110099222A (en) * 2019-05-17 2019-08-06 睿魔智能科技(深圳)有限公司 A kind of exposure adjustment method of capture apparatus, device, storage medium and equipment
CN110570370A (en) * 2019-08-26 2019-12-13 Oppo广东移动通信有限公司 image information processing method and device, storage medium and electronic equipment

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US7683964B2 (en) * 2005-09-05 2010-03-23 Sony Corporation Image capturing apparatus and image capturing method
JP5744437B2 (en) * 2010-08-18 2015-07-08 キヤノン株式会社 TRACKING DEVICE, TRACKING METHOD, AND PROGRAM
EP3190784A4 (en) * 2015-11-19 2018-04-11 Streamax Technology Co., Ltd. Method and apparatus for switching region of interest
CN108174118B (en) * 2018-01-04 2020-01-17 珠海格力电器股份有限公司 Image processing method and device and electronic equipment
CN109308687A (en) * 2018-09-06 2019-02-05 百度在线网络技术(北京)有限公司 Method and apparatus for adjusting brightness of image


Also Published As

Publication number Publication date
CN111050086A (en) 2020-04-21

Similar Documents

Publication Publication Date Title
US9576362B2 (en) Image processing device, information storage device, and processing method to acquire a summary image sequence
JP5744437B2 (en) TRACKING DEVICE, TRACKING METHOD, AND PROGRAM
US10136804B2 (en) Automatic fundus image capture system
US9911203B2 (en) System and method for size estimation of in-vivo objects
US20130188845A1 (en) Device, system and method for automatic detection of contractile activity in an image frame
CN105635565A (en) Shooting method and equipment
US8913807B1 (en) System and method for detecting anomalies in a tissue imaged in-vivo
CN111091536A (en) Medical image processing method, apparatus, device, medium, and endoscope
CN111050086B (en) Image processing method, device and equipment
KR20180036464A (en) Method for Processing Image and the Electronic Device supporting the same
US9993143B2 (en) Capsule endoscope and capsule endoscope system
CN113962859B (en) Panorama generation method, device, equipment and medium
JP2021165944A (en) Learning method, program, and image processing apparatus
CN113822198B (en) Peanut growth monitoring method, system and medium based on UAV-RGB image and deep learning
CN113658065A (en) Image noise reduction method and device, computer readable medium and electronic equipment
CN112633113A (en) Cross-camera human face living body detection method and system
US10201266B2 (en) Single image sensor control for capturing mixed mode images
CN113593707B (en) Stomach early cancer model training method and device, computer equipment and storage medium
CN112639868A (en) Image processing method and device and movable platform
CN114785948B (en) Endoscope focusing method and device, endoscope image processor and readable storage medium
CN112669817B (en) Language identification method and device and electronic equipment
CN113744319B (en) Capsule gastroscope trajectory tracking method and device
KR20180071532A (en) Method and Apparatus for Capturing and Storing High Resolution Endoscope Image
US20220280026A1 (en) Method of image enhancement for distraction deduction
CN113658083A (en) Eyeball image noise elimination method, system, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221010

Address after: 401120 18 neon Road, two Road Industrial Park, Yubei District, Chongqing

Patentee after: CHONGQING JINSHAN SCIENCE & TECHNOLOGY (GROUP) Co.,Ltd.

Address before: 404100 1-1, 2-1, 3-1, building 5, No. 18, Cuiping Lane 2, Huixing street, Yubei District, Chongqing

Patentee before: Chongqing Jinshan Medical Technology Research Institute Co.,Ltd.
