CN111866400B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN111866400B
Authority
CN
China
Prior art keywords
image information
egg
preset
brightness
image
Prior art date
Legal status
Active
Application number
CN202010634176.8A
Other languages
Chinese (zh)
Other versions
CN111866400A (en)
Inventor
苏睿 (Su Rui)
Current Assignee
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd filed Critical Jingdong Technology Information Technology Co Ltd
Priority to CN202010634176.8A priority Critical patent/CN111866400B/en
Publication of CN111866400A publication Critical patent/CN111866400A/en
Application granted granted Critical
Publication of CN111866400B publication Critical patent/CN111866400B/en

Classifications

    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T7/70 Determining position or orientation of objects or cameras
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30188 Vegetation; Agriculture


Abstract

The application relates to an image processing method and device. The method comprises: acquiring first image information obtained by shooting a target object after exposure according to a first preset time length; acquiring second image information obtained by shooting the target object after exposure according to a second preset time length, the second preset time length being different from the first; determining, in the first image information, first partial image information whose brightness does not meet a preset condition; determining, in the second image information, second partial image information corresponding to the first partial image information; and fusing the first image information and the second partial image information to obtain third image information. With the technical scheme provided by the embodiments of the application, complete detail can be retained from images with different exposure levels, and the high-dynamic-range problem caused by color differences between different parts of the target object is solved losslessly.

Description

Image processing method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
An important step in the egg sorting process is sorting out damaged and cracked eggs; otherwise, once the eggs are transported to market or stored for a long time, they spoil, affecting both hygiene and human health. A problem was found, however, when photographing a whole tray of eggs: the same set of exposure parameters cannot capture the features of every egg.
When a camera takes a picture, the exposure parameters are usually fixed. A problem arises, however, in data acquisition for egg-breakage detection: because eggs differ in color depth, a single exposure parameter cannot achieve the best effect for every egg in the picture, and some eggs are overexposed, so the features on their surfaces cannot be clearly observed or obtained from the picture.
No effective solution to these technical problems in the related art has yet been proposed.
Disclosure of Invention
To solve, or at least partially solve, the above technical problem, the present application provides an image processing method and apparatus.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring first image information obtained by shooting a target object after exposure according to a first preset time length;
acquiring second image information obtained by shooting the target object after exposure according to a second preset time length; wherein the second preset time length is different from the first preset time length;
determining first partial image information of which the brightness does not meet a preset condition in the first image information;
determining second partial image information corresponding to the first partial image information among the second image information;
and fusing the first image information and the second partial image information to obtain third image information.
Optionally, as in the foregoing processing method, the target object is an egg tray, and one or more eggs are arranged in the egg tray; the determining of the first partial image information of which the brightness does not meet the preset condition in the first image information comprises:
positioning in the first image information to obtain egg image information corresponding to the eggs;
determining brightness information of the egg image information;
and when the brightness information does not meet a preset brightness limiting condition, obtaining the first partial image information according to the egg image information.
Optionally, as in the foregoing processing method, the positioning in the first image information to obtain egg image information corresponding to the egg includes:
inputting the first image information into a poultry egg detection model obtained by pre-training, to obtain a target detection result; the poultry egg detection model comprises at least one convolution layer with a receptive field meeting a preset range;
and obtaining the egg image information according to the target detection result.
Optionally, as in the foregoing processing method, the method further includes:
determining first image size information of a sample image;
determining second image size information of each sample egg image in the sample images;
obtaining at least one anchor point frame meeting corresponding size conditions according to the size information of each second image;
training an SSD model to be trained through a preset training image to obtain a trained SSD model; wherein, each convolution layer in the SSD model to be trained is provided with all the anchor point frames;
and verifying the trained SSD model through a preset verification image, and taking the trained SSD model as the poultry egg detection model after the trained SSD model meets the preset precision requirement.
Optionally, as in the foregoing processing method, when the brightness information does not satisfy a preset brightness limitation condition, obtaining the first partial image information according to the egg image information includes:
obtaining the brightness mean value and/or the brightness variance value of the egg image information according to the brightness information;
and when the brightness mean value is not in a preset brightness range and/or the brightness variance value is not in a preset variance range, obtaining the first partial image information according to the egg image information.
Optionally, as the foregoing processing method, the method further includes:
detecting whether the eggs in the third image information are damaged or not according to a pre-trained egg damage detection model to obtain classification information of the eggs;
and generating a label corresponding to the poultry egg according to the classification information.
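The damage-detection and labelling steps above can be sketched as follows. This is a minimal illustration, not the patent's model: `damage_model` is a hypothetical stub standing in for the pre-trained egg damage detection model, and the dark-pixel rule it applies is an assumption made only so the example runs.

```python
# Hedged sketch of the labelling step: a damage classifier examines each egg
# crop from the fused (third) image, and a label is generated per egg from
# the classification result.

def damage_model(egg_crop):
    """Stub classifier: flags an egg as damaged if its darkest pixel is very
    dark (a crack tends to image as a dark line); illustrative rule only."""
    return "damaged" if min(egg_crop) < 30 else "intact"

def label_eggs(egg_crops):
    """Classify each crop and generate a label from the classification."""
    return [{"egg": i,
             "class": damage_model(crop),
             "label": f"egg_{i}_{damage_model(crop)}"}
            for i, crop in enumerate(egg_crops)]

# Two toy "crops" as flat pixel lists: the first contains a dark crack pixel.
labels = label_eggs([[200, 210, 25], [180, 190, 200]])
```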
Optionally, in the processing method as described above, the second preset time period is shorter than the first preset time period; the acquiring of the second image information obtained by shooting the target object after exposure according to a second preset time duration includes:
taking the first preset time length as the initial duration, gradually shortening the exposure time for shooting the target object according to a preset exposure-duration gradient, and acquiring the image information corresponding to each exposure time; shooting continues until the target object is exposed according to the second preset time length, obtaining second image information whose brightness meets the preset condition.
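The step-down strategy above can be sketched as follows, assuming a hypothetical `capture` function standing in for a real camera API (here simulated so the loop can run on its own); the brightness range, durations and gradient are illustrative values, not taken from the patent.

```python
def mean_brightness(image):
    """Mean brightness of a flat list of pixel values."""
    return sum(image) / len(image)

def capture(exposure_ms):
    """Simulated camera: brightness grows with exposure time, clipping at 255."""
    return [min(255, exposure_ms * 3) for _ in range(100)]

def step_down_exposure(first_ms, gradient_ms, low=60, high=180):
    """Start from the first preset duration and shorten the exposure by a
    fixed gradient until the captured image's mean brightness falls inside
    [low, high]; the duration found plays the role of the second preset
    time length."""
    exposure = first_ms
    while exposure > 0:
        image = capture(exposure)
        if low <= mean_brightness(image) <= high:
            return exposure, image
        exposure -= gradient_ms
    raise RuntimeError("no exposure duration in the acceptable range")

exposure, image = step_down_exposure(first_ms=100, gradient_ms=10)
```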
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the first acquisition module is used for acquiring first image information obtained by shooting a target object after exposure is carried out according to a first preset time length;
the second acquisition module is used for acquiring second image information obtained by shooting the target object after exposure is carried out according to a second preset time length; the length of the second preset time length is different from that of the first preset time length;
the first determining module is used for determining, in the first image information, first partial image information whose brightness does not meet a preset condition;
a second determining module configured to determine second partial image information corresponding to the first partial image information in the second image information;
and the fusion module is used for fusing the first image information and the second partial image information to obtain third image information.
In a third aspect, an embodiment of the present application provides an electronic device, including: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the processing method according to any one of the preceding claims when executing the computer program.
In a fourth aspect, the present application provides a non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer instructions, and the computer instructions cause the computer to execute the processing method according to any one of the foregoing.
The embodiments of the application provide an image processing method and device, wherein the method comprises: acquiring first image information obtained by shooting a target object after exposure according to a first preset time length; acquiring second image information obtained by shooting the target object after exposure according to a second preset time length, the second preset time length being different from the first; determining, in the first image information, first partial image information whose brightness does not meet a preset condition; determining, in the second image information, second partial image information corresponding to the first partial image information; and fusing the first image information and the second partial image information to obtain third image information. With the technical scheme provided by the embodiments of the application, complete detail can be retained from images with different exposure levels, and the high-dynamic-range problem caused by color differences between different parts of the target object is solved losslessly.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an image processing method according to another embodiment of the present application;
fig. 3 is a schematic flowchart of an image processing method according to another embodiment of the present application;
fig. 4 is a schematic flowchart of an image processing method according to another embodiment of the present application;
FIG. 5 is an image of an egg tray taken with an exposure parameter according to an embodiment of the present disclosure;
FIG. 6 is an image of an egg tray taken with another exposure parameter according to an application example of the present application;
FIG. 7 is a schematic illustration of the results of egg localization for the image shown in FIG. 5 in an application example of the present application;
FIG. 8 is an image obtained by stitching the eggs of FIG. 5 and FIG. 6 in an application example of the present application;
FIG. 9 is a diagram of an anchor block in an embodiment of the present application;
FIG. 10 is a graph illustrating the brightness distribution of an overexposed egg compared to the brightness distribution of a normally exposed egg according to an embodiment of the present application;
fig. 11 is a block diagram of an image processing apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
High-dynamic-range imaging (HDRI) is currently the main method for solving overexposure, and many cameras provide this function. Compared with an ordinary image, a final HDR image can be synthesized from low-dynamic-range (LDR) images taken with different exposure times, using the details that each exposure time captures best.
The main purpose of HDRI is to synthesize multiple pictures obtained with different exposure parameters, thereby achieving a better visual effect.
Existing high-dynamic-range solutions:
HDRI is the abbreviation of high-dynamic-range image, invented to solve the high-dynamic-range problem. Simply put, an HDRI is an image with a very wide luminance range: it stores more luminance data than images in other formats, and it records luminance differently from conventional pictures. Instead of compressing luminance information non-linearly into an 8-bit or 16-bit color space, it records luminance in a directly corresponding manner; it can be said to record the illumination information of the picture's environment, so it can even be used to "illuminate" a scene. Many HDRI files are provided as panoramas, which can be used as environment backgrounds to create reflections and refractions.
However, HDRI has the following drawbacks:
1. A direct consequence of HDRI synthesis is that picture colors fade and much detail is lost.
2. HDRI often requires multiple photographs to produce a good result, reducing overall shooting efficiency.
Fig. 1 shows an image processing method according to an embodiment of the present application, which includes the following steps S1 to S5:
S1, acquiring first image information obtained by shooting a target object after exposure according to a first preset time length.
Specifically, the target object is the object that needs to be photographed to acquire an image. The target object may include one or more target objects, where an included target object may be a part of the whole (for example, when the target object is an animal, an included target object may be its eye) or an individual making up the whole (for example, when the target object is a group of people, an included target object may be one person), and the colors of the included target objects may differ in depth. The first preset time length may be set so that the target object with the darkest color meets a preset exposure requirement I after shooting (for example, it is not so dark that certain specific details cannot be recognized); alternatively, it may be set so that the target object with the lightest color meets a preset exposure requirement II after shooting (for example, it is not overexposed).
Therefore, after exposure according to the first preset time length, the target object with the darkest color in the first image information can generally meet preset exposure requirement I, so that its specific details can be recognized; or the target object with the lightest color can generally meet preset exposure requirement II, and its specific details can likewise be recognized.
S2, acquiring second image information obtained by shooting a target object after exposure according to a second preset time length; the length of the second preset time length is different from that of the first preset time length.
Specifically, the lengths of the second preset time duration and the first preset time duration are different, so that the exposure time duration of each target object in the second image information is different from the exposure time duration of each target object in the first image information.
And S3, determining first partial image information of which the brightness does not accord with preset conditions in the first image information.
Specifically, brightness not meeting the preset condition in the first image information may mean that the brightness is too high, causing overexposure, or too low, making the image so dark that specific detail features cannot be recognized.
The first partial image information may be the image of a target object whose brightness does not meet the preset condition in the first image information; further, it may be an image of which only a part fails the preset condition, or one that fails it entirely.
And S4, determining second partial image information corresponding to the first partial image information in the second image information.
Specifically, the second partial image information at the same position as the first partial image information can be located in the second image information by position matching. Generally, the target object and the image capturing device are relatively static, so each target object keeps the same position in the second image information as in the first; when the target object and the image capturing device are in relative motion, the second partial image information corresponding to the same target object as the first partial image information can be determined in the second image information by image recognition.
Since the exposure time of the second image information is different from that of the first image information, the exposure time of the second partial image information is different from that of the first partial image information, and therefore, the lost image information in the first partial image information can be compensated by the second partial image information.
And S5, fusing the first image information and the second partial image information to obtain third image information.
In particular, the size of the second partial image information generally coincides with the size of the first partial image information.
Optionally, the image of each target object may be cropped out to obtain an image corresponding to each target object, and each image may be encoded, with the cropping and encoding of the first and second image information kept consistent; the second partial image information then replaces the first partial image information according to the same rule, and the third image information is obtained by stitching.
The method of the embodiment can synthesize the same image through images with different exposure durations, thereby not causing the loss of information in the image, being capable of keeping complete details and being more convenient for the image analysis and other operations of the image in the later period.
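As a minimal sketch of the fusion step (S5), assuming the target object and camera are static so each region occupies the same bounding box in both exposures; images are plain row-major lists of pixel rows and all values are illustrative:

```python
# The long-exposure image serves as the base; each overexposed bounding box
# (x, y, w, h) is replaced by the same region from the short-exposure image.

def fuse(first_img, second_img, overexposed_boxes):
    third = [row[:] for row in first_img]          # copy the base image
    for (x, y, w, h) in overexposed_boxes:
        for row in range(y, y + h):
            third[row][x:x + w] = second_img[row][x:x + w]
    return third

first = [[250] * 8 for _ in range(8)]    # long exposure: region blown out
second = [[120] * 8 for _ in range(8)]   # shorter exposure keeps detail
fused = fuse(first, second, [(2, 2, 3, 3)])
```

Because whole detected regions are swapped rather than tone-mapped, no luminance information inside a region is compressed, which is the lossless property the embodiment relies on.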
As shown in fig. 2, in some embodiments of the foregoing processing method, the target object is an egg tray, and one or more eggs are arranged in the egg tray; the step S3 of determining the first partial image information whose brightness does not meet the preset condition in the first image information includes the following steps S31 to S33:
and S31, positioning in the first image information to obtain egg image information corresponding to the eggs.
Specifically, the target object is an egg tray, so that the eggs in the egg tray are the target objects in the foregoing embodiments; optionally, the egg image information may include a complete image of the egg to ensure uniform exposure of the same egg.
Further, when a plurality of eggs exist in the first image information, the egg image information corresponding to each egg is obtained.
And S32, determining the brightness information of the egg image information.
Specifically, the egg image information is generally rectangular image information including the egg, and therefore the egg is a part of the rectangular image information.
Optionally, the brightness information of the egg image information may be brightness information of an image of the egg obtained by recognition or brightness information of the whole rectangular image information including the egg.
Furthermore, after gray-scale processing is performed on the egg image information to obtain a gray-scale image, the brightness component of the gray-scale image is obtained as the brightness information.
And S33, when the brightness information does not meet the preset brightness limiting condition, obtaining first partial image information according to the egg image information.
Specifically, the brightness limiting condition is an interval range limiting the brightness; when the brightness is above the upper limit of the range or below its lower limit, overexposure or insufficient light results, so the image cannot provide effective details. In that case, the egg image information is taken as the first partial image information.
By the method in the embodiment, the image which does not meet the preset brightness requirement in the first image information can be quickly identified and obtained, and the processing efficiency is high.
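Steps S32 and S33 can be sketched as follows. The gray-scale conversion uses the standard BT.601 luma weights, and the mean and variance thresholds are illustrative assumptions, not values from the patent:

```python
def to_gray(rgb_pixels):
    """BT.601 luma of a flat list of (r, g, b) pixels."""
    return [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in rgb_pixels]

def violates_brightness_limit(rgb_pixels,
                              mean_range=(50.0, 200.0),    # assumed range
                              var_range=(0.0, 4000.0)):    # assumed range
    """Flag an egg crop as 'first partial image information' when its
    brightness mean and/or variance leave the preset ranges."""
    gray = to_gray(rgb_pixels)
    mean = sum(gray) / len(gray)
    var = sum((g - mean) ** 2 for g in gray) / len(gray)
    mean_ok = mean_range[0] <= mean <= mean_range[1]
    var_ok = var_range[0] <= var <= var_range[1]
    return not (mean_ok and var_ok)

overexposed = [(250, 250, 250)] * 100   # near-white crop, mean far above range
normal = [(128, 128, 128)] * 100        # mid-gray crop, inside both ranges
```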
As shown in fig. 3, in some embodiments, as in the foregoing processing method, the step S31 of locating the image information of the egg corresponding to the egg in the first image information includes the following steps S311 and S312:
s311, inputting the first image information into a pre-trained egg detection model to obtain a target detection result; the poultry egg detection model comprises at least one convolution layer with a receptive field meeting a preset range.
Specifically, the receptive field refers to the input region "seen" by a neuron in a neural network; in a convolutional neural network, the computation of an element on a feature map is affected by a certain region of the input image, and that region is the receptive field of the element.
Because the size of the poultry egg is generally in a specific range, the preset range is generally selected according to the size of the poultry egg, and the convolutional layer with an excessively small or large detection range is removed.
In one optional implementation, the detection scenario is simple: only the positions holding eggs in each egg tray need to be detected, the number of detection boxes is limited (generally 30, that is, a full tray), and there are no very small targets. Therefore, the detection part attached to Conv5_3, the target detection head obtained from the shallow features in the SSD network, can be cut. Since there are also no very large targets in this detection scenario, the detection part of the Conv11_2 layer corresponding to the highest-level features in the network can be cut as well, keeping only the target detection results corresponding to the middle layers Conv7, Conv8_2, Conv9_2 and Conv10_2.
and S312, obtaining egg image information according to the target detection result.
Specifically, after eggs are detected according to the target detection result, the image information of the eggs corresponding to the detected eggs can be obtained according to the minimum circumscribed rectangle of the detected eggs.
In summary, the method in the embodiment can effectively reduce the recognition result of the useless convolution layer and improve the positioning efficiency.
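The layer-pruning idea rests on matching each detection layer's receptive field to the expected egg size. A minimal receptive-field calculator is sketched below; the layer names, kernel sizes and strides are illustrative, not the real SSD configuration:

```python
# For a stack of conv layers, the receptive field grows as
# rf_l = rf_{l-1} + (kernel - 1) * jump_{l-1}, with jump_l = jump_{l-1} * stride.
# Detection heads whose receptive field falls outside the preset range
# (too shallow or too deep) are the ones to cut.

def receptive_fields(layers):
    """layers: list of (name, kernel, stride) tuples, input to output.
    Returns {name: receptive-field size in input pixels}."""
    rf, jump, out = 1, 1, {}
    for name, kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
        out[name] = rf
    return out

def heads_in_range(rfs, lo, hi):
    """Keep only heads whose receptive field meets the preset range."""
    return [name for name, rf in rfs.items() if lo <= rf <= hi]

stack = [("conv1", 3, 2), ("conv2", 3, 2), ("conv3", 3, 2), ("conv4", 3, 2)]
rfs = receptive_fields(stack)
```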
In some embodiments, as in the foregoing processing method, the method further comprises steps T1 to T7 as follows:
and step T1, determining first image size information of the sample image.
Specifically, the image size generally needs to be adjusted to suit the model, so the sample image may be an image scaled to a preset size; for example, when the model is an SSD model, a square image of 300×300 or 512×512 pixels is typically required.
And T2, determining second image size information of each sample egg image in the sample images.
Specifically, the sample egg image is an image corresponding to each egg captured from the sample image.
Because the shooting angle of the egg tray and the shooting tool are relatively fixed, after the size of the sample image is determined, the size of each egg is within a specific interval.
The second image size information may be image size information for images of individual sample eggs.
And T3, obtaining at least one anchor point frame meeting the corresponding size condition according to the size information of each second image.
Specifically, the largest, smallest, widest, and tallest second image sizes can be obtained from the second image size information, and the corresponding size condition can then be derived from them (for example, the upper size limit can be obtained from the largest width and height). Generally, one anchor box (anchor) matching the maximum size in the condition is needed, and the sizes of the other anchor boxes can be obtained by floating that size up and down.
One optional implementation is as follows: because of how the eggs lie in the egg tray, their imaged aspect ratio is not exactly 1:1 but is close to it. Therefore, to speed up computation, the preset anchor sizes in the SSD model are modified so that the anchor boxes are several squares of different sizes; optionally, three sizes can be preset, chosen so that the anchor boxes cover the imaged eggs.
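To illustrate step T3, a sketch of deriving square anchor side lengths from the observed sample egg boxes; the function name and the even spacing between the smallest and largest observed extents are assumptions for illustration, not the patent's prescribed rule:

```python
def square_anchor_sizes(box_sizes, n_anchors=3):
    """Derive square anchor side lengths that cover the observed eggs.

    box_sizes : list of (w, h) sizes of the sample egg boxes.
    Returns n_anchors side lengths spread between the smallest and the
    largest observed extent, so the biggest anchor still frames the
    largest egg (the upper size limit from the widest/tallest box).
    """
    lo = min(min(w, h) for w, h in box_sizes)
    hi = max(max(w, h) for w, h in box_sizes)
    step = (hi - lo) / (n_anchors - 1)
    return [round(lo + i * step) for i in range(n_anchors)]
```

With three sizes this yields the small/medium/large squares of strategy a1 in fig. 9.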
As shown in fig. 9, in an alternative solution, a1 is the anchor-setting strategy of this embodiment (three squares of different sizes), while a2, a3 and a4 are anchor-setting strategies in the related art. Since the eggs lie flat in the tray and their imaged length-to-width ratio is essentially 1:1, the anchor settings in a2, a3 and a4 struggle to achieve a good framing effect.
Step T4, training the SSD model to be trained through a preset training image to obtain a trained SSD model; wherein, all anchor frames are arranged in each convolution layer in the SSD model to be trained.
The training images are input into the SSD model to be trained and then passed through the convolutional layers in sequence. In each convolutional layer, each anchor box first frames a part of the training image as it moves by a preset stride, and recognition training is performed on that part; objects exceeding the anchor-box size are not trained on, which speeds up training.
Furthermore, the model can be trained by annotating the eggs in the training images with horizontal bounding boxes, so that each image corresponds to one piece of annotation information; in addition, data-enhancement processing such as flipping and blurring can be applied to the training images when training the detection model.
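The flip-and-blur enhancement mentioned above could be sketched as follows, assuming numpy image arrays; the 50% application probabilities and the 3×3 box blur are illustrative choices, not specified by the patent:

```python
import numpy as np

def augment(image, rng):
    """Return a randomly flipped and box-blurred copy of `image` (H x W x C)."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]          # horizontal flip
    if rng.random() < 0.5:
        # cheap 3x3 box blur via nine shifted views of an edge-padded copy
        pad = np.pad(out.astype(float), ((1, 1), (1, 1), (0, 0)), mode='edge')
        out = sum(pad[i:i + out.shape[0], j:j + out.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
        out = out.astype(image.dtype)
    return out
```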
And T5, verifying the trained SSD (Single Shot MultiBox Detector) model through a preset verification image, and taking the trained SSD model as the poultry egg detection model after it meets the preset precision requirement.
Specifically, images of eggs may be randomly pasted onto images of a blank egg tray (an egg tray without eggs) to obtain the training images and verification images.
The training image is an image used for training the model to be trained, and the verification image is an image used for verifying the performance (e.g., accuracy) of the trained model.
After a preset number of verification images are input into the trained SSD model, recognition results are obtained and the recognition accuracy is computed from them. When the accuracy meets a preset threshold (for example, 98%), the trained SSD model meets the preset accuracy requirement and can be used as the poultry egg detection model for recognizing eggs.
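The verification step can be sketched as a simple accuracy check over the verification set; the function names and the equality-based notion of a correct recognition are illustrative assumptions:

```python
def accuracy(predictions, ground_truth):
    """Fraction of verification images whose recognition result matches the label."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

def meets_precision_requirement(predictions, ground_truth, threshold=0.98):
    """True when the trained model reaches the preset accuracy threshold."""
    return accuracy(predictions, ground_truth) >= threshold
```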
In some embodiments, as in the foregoing processing method, the step S33 of obtaining the first partial image information according to the egg image information when the brightness information does not satisfy the preset brightness limiting condition includes the following steps S331 and S332:
and S331, obtaining the brightness mean value and/or the brightness variance value of the egg image information according to the brightness information.
Specifically, the distribution condition of the brightness in the egg image information can be obtained according to the brightness information of the egg image information, and then the brightness mean value and/or the variance value can be obtained according to the distribution condition.
And S332, when the brightness mean value is not within a preset brightness range and/or the brightness variance value is not within a preset variance range, obtaining first partial image information according to the egg image information.
Specifically, the cases in which the brightness limiting condition is not satisfied may be: the brightness mean is not within the preset brightness range; or the brightness variance is not within the preset variance range; alternatively, the condition may be deemed unsatisfied only when both of the above hold.
For example, when the egg image information is converted into a grey-scale image before the brightness is calculated, the brightness information can be obtained from the grey values of the image. As shown in fig. 10, curve Q1 is the brightness profile of an overexposed egg and curve Q2 that of a normally exposed egg. With grey values in [0, 255], the lower the brightness the closer the grey value is to 0, and the higher the brightness the closer it is to 255. Optionally, a mean brightness (mean grey value) of 160 can be taken as the upper threshold of the brightness range within which eggs are identified with 100% accuracy.
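A sketch of the brightness test of steps S331/S332 on an RGB egg patch; the standard luma weights for grey conversion and the mean-only decision are assumptions, while the 160 ceiling follows the embodiment above:

```python
import numpy as np

def brightness_stats(patch):
    """Mean and variance of the grey-level brightness of an RGB egg patch."""
    grey = (0.299 * patch[..., 0] + 0.587 * patch[..., 1]
            + 0.114 * patch[..., 2]).astype(float)
    return float(grey.mean()), float(grey.var())

def is_overexposed(patch, mean_limit=160.0):
    """Step S332 for the mean-only case: the patch is treated as
    overexposed when its mean grey value exceeds the limit."""
    mean, _ = brightness_stats(patch)
    return mean > mean_limit
```

The variance from `brightness_stats` can additionally be compared against a preset variance range when both conditions are used.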
As shown in fig. 4, in some embodiments, the processing method as described above further includes steps S6 and S7 as follows:
and S6, detecting whether the eggs in the third image information are damaged or not according to a pre-trained egg damage detection model to obtain classification information of the eggs.
Specifically, only whether an egg is damaged needs to be judged; the egg damage detection model only has to recognize cracks and holes on the egg surface, not the length of a crack or the size of a hole. The model can therefore be based on a classification network, and further optionally a binary classification model can be adopted.
And S7, generating a label corresponding to the poultry egg according to the classification information.
Specifically, after the classification information of the eggs is obtained, whether each egg is intact can be judged from it, and the classification information is used as the egg's label. Furthermore, the eggs in the egg tray can be numbered and all classification information and labels output against those numbers, so that the damage state of every egg is recorded quickly.
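Numbering the eggs and attaching their classification labels can be as simple as the following sketch; the "intact"/"broken" strings and the boolean convention are hypothetical:

```python
def label_eggs(classifications):
    """Pair each egg's tray number with its label.

    classifications : one boolean per egg from the damage model,
    True meaning the egg is intact (assumed convention).
    """
    return [(i + 1, "intact" if ok else "broken")
            for i, ok in enumerate(classifications)]
```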
By adopting the method in the embodiment, the integrity degree of the poultry eggs can be rapidly identified, manual screening of the poultry eggs is avoided, and the processing efficiency is improved.
In some embodiments, as in the processing method above, the second predetermined duration is shorter than the first predetermined duration; the step S2 is to acquire second image information obtained by shooting the target object after exposure according to a second preset duration, and specifically includes:
With the first preset duration as the initial duration, the exposure duration used to shoot the target object is reduced step by step according to a preset exposure-duration gradient, and the image information corresponding to each exposure duration is acquired, until the target object is shot with exposure at the second preset duration, yielding second image information whose brightness meets the preset condition.
That is, when the first preset duration is T1 and the preset exposure-duration gradient is t, the exposure duration for the next shot is T1 - t; the corresponding image information is acquired and its brightness is checked against the preset condition. If the condition is not met, the exposure duration is decremented by the gradient t again to obtain an image at T1 - 2t, and so on recursively, until the n-th acquisition shoots the target object at the second preset duration T2 (T2 = T1 - nt, where n is an integer of 1 or more), yielding second image information whose brightness meets the preset condition (e.g., brightness not higher than 160, as in the foregoing embodiment).
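The decrementing exposure search can be sketched as below; `capture` and `brightness_ok` stand in for the real camera call and the brightness test (e.g., mean grey not above 160), and `max_steps` is an added guard against an endless loop:

```python
def find_exposure(t1, step, capture, brightness_ok, max_steps=10):
    """Shorten the exposure from t1 in `step` decrements until the
    captured frame passes `brightness_ok`; returns (duration, frame)."""
    t = t1
    for _ in range(max_steps):
        img = capture(t)
        if brightness_ok(img):
            return t, img
        t -= step                  # T1 - t, T1 - 2t, ..., T1 - n*t
    raise RuntimeError("no exposure within range passed the brightness test")
```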
By adopting the method in the embodiment, the shooting behavior can be automatically judged without manual intervention, the automation degree of the system can be effectively improved, and the processing efficiency is improved.
Application example:
the method mainly comprises the following steps:
1. The egg tray is shot with two experimentally determined sets of exposure parameters, one adapted to light-colored eggs (as shown in fig. 5) and the other to dark-colored eggs (as shown in fig. 6).
2. The eggs are located in one of the images (in this application example, the overexposed image, i.e., the image shown in fig. 5) using a target detection algorithm; the localization result is shown in fig. 7.
3. Small egg images are cropped from both images using the positioning information obtained in step 2.
4. All the small images are converted to grey scale, since only the brightness component is needed.
5. Brightness analysis is performed, via the brightness component, on the small images from the long exposure to judge whether they are overexposed.
6. If an egg is not overexposed, no processing is done. If it is overexposed, it is a light-colored egg, and the corresponding image in the other, short-exposure image (i.e., in fig. 6) is fused in: specifically, the region at the coordinate box from target detection is directly copied to the corresponding position in fig. 5, giving the stitched image shown in fig. 8.
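Step 6's coordinate-box replacement amounts to copying the detected regions from the short-exposure frame into the long-exposure frame. A sketch, assuming the two frames are pixel-aligned, as they are here because the camera and tray are fixed:

```python
import numpy as np

def fuse_patches(long_exp, short_exp, boxes):
    """Replace overexposed regions of the long-exposure image with the
    same-coordinate regions of the short-exposure image.

    boxes : (x1, y1, x2, y2) detection boxes of the overexposed eggs.
    Both images must share the same size and alignment.
    """
    fused = long_exp.copy()
    for x1, y1, x2, y2 in boxes:
        fused[y1:y2, x1:x2] = short_exp[y1:y2, x1:x2]
    return fused
```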
As shown in fig. 11, according to an embodiment of another aspect of the present application, there is also provided an image processing apparatus including:
the first acquisition module 1 is used for acquiring first image information obtained by shooting a target object after exposure is carried out according to a first preset time length;
the second obtaining module 2 is configured to obtain second image information obtained by shooting a target object after exposure is performed according to a second preset duration; the length of the second preset time length is different from that of the first preset time length;
the first determining module 3 is configured to determine first partial image information that meets a preset condition in the first image information;
a second determining module 4, configured to determine second partial image information corresponding to the first partial image information in the second image information;
and the fusion module 5 is configured to fuse the first image information and the second partial image information to obtain third image information.
Specifically, the specific process of implementing the functions of each module in the apparatus according to the embodiment of the present invention may refer to the related description in the method embodiment, and is not described herein again.
According to another embodiment of the present application, there is also provided an electronic device. As shown in fig. 12, the electronic device may include: a processor 1501, a communication interface 1502, a memory 1503 and a communication bus 1504, wherein the processor 1501, the communication interface 1502 and the memory 1503 communicate with each other through the communication bus 1504.
A memory 1503 for storing a computer program;
the processor 1501 is configured to implement the steps of the above-described method embodiments when executing the program stored in the memory 1503.
The bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
Embodiments of the present application also provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the steps of the above-described method embodiments.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. An image processing method, comprising:
acquiring first image information obtained by shooting a target object after exposure according to a first preset time length;
acquiring second image information obtained by shooting the target object after exposure according to a second preset time length; the length of the second preset time length is different from that of the first preset time length;
determining first partial image information of which the brightness does not meet a preset condition in the first image information;
determining second partial image information corresponding to the first partial image information among the second image information;
fusing the first image information and the second partial image information to obtain third image information;
the target object is an egg tray, and a plurality of eggs are arranged in the egg tray; the determining of the first partial image information of which the brightness does not meet the preset condition in the first image information comprises:
positioning in the first image information to obtain egg image information corresponding to the eggs;
determining brightness information of the egg image information;
when the brightness information does not meet a preset brightness limiting condition, obtaining the first partial image information according to the egg image information;
the positioning in the first image information to obtain egg image information corresponding to the egg comprises:
inputting the first image information into a pre-trained poultry egg detection model to obtain a target detection result by detection; the poultry egg detection model comprises at least one convolution layer whose receptive field meets a preset range, the preset range being obtained according to the size of the poultry egg;
and obtaining the egg image information according to the target detection result.
2. The processing method according to claim 1, characterized in that the method further comprises:
determining first image size information of a sample image;
determining second image size information of each sample egg image in the sample images;
obtaining at least one anchor point frame meeting corresponding size conditions according to the size information of each second image;
training an SSD model to be trained through a preset training image to obtain a trained SSD model; wherein, each convolution layer in the SSD model to be trained is provided with all the anchor point frames;
and verifying the trained SSD model through a preset verification image, and taking the trained SSD model as the poultry egg detection model after the trained SSD model meets the preset precision requirement.
3. The processing method according to claim 1, wherein when the brightness information does not satisfy a preset brightness limiting condition, obtaining the first partial image information according to the egg image information comprises:
obtaining the brightness mean value and/or the brightness variance value of the egg image information according to the brightness information;
and when the brightness mean value is not in a preset brightness range and/or the brightness variance value is not in a preset variance range, obtaining the first partial image information according to the egg image information.
4. The processing method of claim 1, further comprising:
detecting whether the eggs in the third image information are damaged or not according to a pre-trained egg damage detection model to obtain classification information of the eggs;
and generating a label corresponding to the poultry egg according to the classification information.
5. The processing method according to claim 1, wherein the second preset duration is shorter than the first preset duration; the acquiring of the second image information obtained by shooting the target object after exposure according to a second preset time duration includes:
taking the first preset duration as an initial duration, gradually reducing the exposure duration for shooting the target object according to a preset exposure duration gradient, and acquiring image information corresponding to the exposure duration; and shooting until the target object is exposed according to a second preset time length to obtain second image information with the brightness meeting the preset condition.
6. An image processing apparatus characterized by comprising:
the first acquisition module is used for acquiring first image information obtained by shooting a target object after exposure is carried out according to a first preset time length;
the second acquisition module is used for acquiring second image information obtained by shooting the target object after exposure is carried out according to a second preset time length; the length of the second preset time length is different from that of the first preset time length;
the first determining module is used for determining first partial image information of which the brightness does not meet a preset condition in the first image information;
a second determining module configured to determine second partial image information corresponding to the first partial image information in the second image information;
the fusion module is used for fusing the first image information and the second partial image information to obtain third image information;
the target object is an egg tray, and a plurality of eggs are arranged in the egg tray; the first determination module is to:
positioning in the first image information to obtain egg image information corresponding to the eggs; determining brightness information of the egg image information; when the brightness information does not meet a preset brightness limiting condition, obtaining the first partial image information according to the egg image information; the positioning in the first image information to obtain egg image information corresponding to the egg comprises: inputting the first image information into a pre-trained poultry egg detection model to obtain a target detection result by detection, the poultry egg detection model comprising at least one convolution layer whose receptive field meets a preset range, the preset range being obtained according to the size of the poultry egg; and obtaining the egg image information according to the target detection result.
7. An electronic device, comprising: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory is used for storing a computer program;
the processor, when executing the computer program, implementing the processing method of any one of claims 1-5.
8. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the processing method of any one of claims 1 to 5.
CN202010634176.8A 2020-07-02 2020-07-02 Image processing method and device Active CN111866400B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010634176.8A CN111866400B (en) 2020-07-02 2020-07-02 Image processing method and device


Publications (2)

Publication Number Publication Date
CN111866400A CN111866400A (en) 2020-10-30
CN111866400B true CN111866400B (en) 2022-01-07

Family

ID=73152068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010634176.8A Active CN111866400B (en) 2020-07-02 2020-07-02 Image processing method and device

Country Status (1)

Country Link
CN (1) CN111866400B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140147943A (en) * 2013-06-19 2014-12-31 공석태 cc
CN107809582A (en) * 2017-10-12 2018-03-16 广东欧珀移动通信有限公司 Image processing method, electronic installation and computer-readable recording medium
CN110927167A (en) * 2019-10-31 2020-03-27 北京海益同展信息科技有限公司 Egg detection method and device, electronic equipment and storage medium
CN110991220A (en) * 2019-10-15 2020-04-10 北京海益同展信息科技有限公司 Egg detection method, egg image processing method, egg detection device, egg image processing device, electronic equipment and storage medium
CN111122582A (en) * 2019-11-11 2020-05-08 北京海益同展信息科技有限公司 Poultry egg detection method, image processing method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188627B (en) * 2019-05-13 2021-11-23 睿视智觉(厦门)科技有限公司 Face image filtering method and device
CN110929755A (en) * 2019-10-21 2020-03-27 北京海益同展信息科技有限公司 Poultry egg detection method, device and system, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN111866400A (en) 2020-10-30

Similar Documents

Publication Publication Date Title
US11138478B2 (en) Method and apparatus for training, classification model, mobile terminal, and readable storage medium
CN108288027B (en) Image quality detection method, device and equipment
WO2019233393A1 (en) Image processing method and apparatus, storage medium, and electronic device
JP4416795B2 (en) Correction method
US7889890B2 (en) Image capture apparatus and control method therefor
US8068668B2 (en) Device and method for estimating if an image is blurred
JP2005309409A (en) Red-eye preventing device, program and recording medium with recorded program
CN108401154B (en) Image exposure degree non-reference quality evaluation method
CN110580428A (en) image processing method, image processing device, computer-readable storage medium and electronic equipment
CN111210399B (en) Imaging quality evaluation method, device and equipment
WO2007095483A2 (en) Detection and removal of blemishes in digital images utilizing original images of defocused scenes
CN111739110B (en) Method and device for detecting image over-darkness or over-exposure
US20200265575A1 (en) Flaw inspection apparatus and method
CN113163127A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111866400B (en) Image processing method and device
US7970238B2 (en) Method and apparatus for acquiring image of internal structure, and computer product
CN111726543B (en) Method and camera for improving dynamic range of image
CN107615743A (en) Image servicing unit and camera device
CN111784667A (en) Crack identification method and device
CN111122582B (en) Poultry egg detection method, image processing method and device
CN114049317A (en) CCD equipment automatic inspection system and method based on artificial intelligence
CN112183158B (en) Cereal type identification method of cereal cooking equipment and cereal cooking equipment
JP2014232971A (en) Image processing device, image processing method, and program
CN109146966B (en) Visual SLAM front-end processing method, system, storage medium and computer equipment
TWI826108B (en) Method for establishing defect-detection model using fake defect images and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Beijing Economic and Technological Development Zone, Beijing 100176

Applicant before: BEIJING HAIYI TONGZHAN INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant