WO2018120460A1 - Image focal length detection method, apparatus, device and computer readable storage medium - Google Patents

Image focal length detection method, apparatus, device and computer readable storage medium

Info

Publication number
WO2018120460A1
WO2018120460A1 PCT/CN2017/078002
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
area
detected
network
Prior art date
Application number
PCT/CN2017/078002
Other languages
English (en)
French (fr)
Inventor
王健宗
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2018120460A1 publication Critical patent/WO2018120460A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals

Definitions

  • the present invention relates to the field of image technologies, and in particular, to an image focus detection method, apparatus, device, and computer readable storage medium.
  • the shooting distance of the image directly affects the use value of the image.
  • Pictures shot from too far away not only fail to provide the required details, but also waste valuable storage space and consume valuable computing resources. Therefore, pictures with too large a shooting distance need to be filtered out before business pictures are stored and processed. However, screening out such pictures manually costs considerable labor and material resources, and as the scale of the picture data grows, the difficulty of screening becomes greater and greater.
  • the main object of the present invention is to provide an image focal length detection method, apparatus, device, and computer readable storage medium, which aim to solve the technical problem that it is difficult to screen out pictures whose focal length does not meet requirements.
  • the present invention provides an image focal length detecting method, and the image focal length detecting method includes:
  • the step of obtaining the captured image to be detected, and determining, by using the preset image detection model, the target area where the target image is located in the image to be detected includes:
  • the region generation network is a convolutional neural network;
  • before the step of loading the candidate region into the target detection network of the image detection model to determine the target region where the target image is located in the candidate region, the method further includes:
  • the step of determining whether the focal length of the to-be-detected picture meets the shooting requirement according to the area ratio comprises:
  • if the area ratio is greater than or equal to the preset threshold, it is determined that the focal length of the picture to be detected meets the shooting requirement.
  • the step of obtaining the captured image to be detected, and determining the target area where the target image is located in the image to be detected by using the preset image detection model further includes:
  • the present invention also provides an image focal length detecting device, wherein the image focal length detecting device includes:
  • a first determining module configured to acquire a captured image to be detected, and determine, by using a preset image detection model, a target area where the target image in the to-be-detected image is located;
  • a calculation module configured to calculate the proportion of the area occupied by the target area in the picture to be detected;
  • a second determining module configured to determine, according to the area ratio, whether the focal length of the picture to be detected meets the shooting requirement.
  • the first determining module is further configured to acquire the captured picture to be detected, and load the picture to be detected into the area generation network of the image detection model to determine the candidate region of the target image in the picture to be detected, where the area generation network is a convolutional neural network; and to load the candidate region into the target detection network of the image detection model to determine the target area where the target image is located in the candidate region.
  • the first determining module comprises:
  • An acquiring unit configured to acquire a reference area corresponding to the target image in the image detection model
  • an optimization unit configured to calculate an error of the location of the target image in the candidate region and the reference region, and optimize the region generation network by using a network optimization function according to the error.
  • the second determining module comprises:
  • a determining unit configured to determine whether the area ratio is less than a preset threshold
  • a determining unit configured to determine that the focal length of the picture to be detected does not meet the shooting requirement if the area ratio is less than the preset threshold, and to determine that the focal length of the picture to be detected meets the shooting requirement if the area ratio is greater than or equal to the preset threshold.
  • the image focus detection device further includes:
  • an acquiring module configured to acquire preset data corresponding to the target image that can be detected by the image detection model
  • An adjustment module configured to adjust an area generation network of the image detection model according to the preset data, to obtain the adjusted area generation network
  • a generating module configured to generate target area training data by using the adjusted area generating network
  • An optimization module configured to optimize a target detection network of the image detection model according to the target area training data
  • a third determining module configured to determine a feature extraction layer shared by the area generation network and the target detection network, and fix the feature extraction layer.
  • the present invention also provides an image focus detection apparatus, the image focus detection apparatus including a processor and a memory;
  • the processor is configured to execute an image focus detection program stored in the memory to implement the following steps:
  • the processor is further configured to execute the image focus detection program to implement the following steps:
  • the region generation network is a convolutional neural network;
  • the candidate region is loaded into a target detection network of the image detection model to determine a target region in which the target image in the candidate region is located.
  • the processor is further configured to execute the image focus detection program to implement the following steps:
  • the processor is further configured to execute the image focus detection program to implement the following steps:
  • if the area ratio is greater than or equal to the preset threshold, it is determined that the focal length of the picture to be detected meets the shooting requirement.
  • the processor is further configured to execute the image focus detection program to implement the following steps:
  • the present invention also provides a computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the following steps:
  • the one or more programs are executable by the one or more processors to implement the following steps:
  • the region generation network is a convolutional neural network;
  • the candidate region is loaded into a target detection network of the image detection model to determine a target region in which the target image in the candidate region is located.
  • the one or more programs are executable by the one or more processors to implement the following steps:
  • the one or more programs are executable by the one or more processors to implement the following steps:
  • if the area ratio is greater than or equal to the preset threshold, it is determined that the focal length of the picture to be detected meets the shooting requirement.
  • said one or more programs are executable by said one or more processors to implement the following steps:
  • the present invention determines the target area where the target image is located in the picture to be detected by using a preset image detection model, calculates the area ratio of the target area in the picture to be detected, and determines, according to the area ratio, whether the focal length of the picture to be detected meets the shooting requirements. Automatic screening of pictures whose focal length does not meet requirements is thereby realized, and the difficulty of screening out such pictures is reduced.
  • FIG. 1 is a schematic flow chart of a preferred embodiment of an image focal length detecting method according to the present invention
  • FIG. 2 is a schematic flowchart of obtaining a captured image to be detected, and determining a target area where the target image in the to-be-detected image is located by using a preset image detection model according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of functional modules of a preferred embodiment of an image focal length detecting device of the present invention.
  • FIG. 4 is a schematic structural diagram of a device in a hardware operating environment according to an embodiment of the present invention.
  • the invention provides an image focal length detecting method.
  • FIG. 1 is a schematic flow chart of a preferred embodiment of an image focal length detecting method according to the present invention.
  • the image focus detection method includes:
  • Step S10 Acquire a captured image to be detected, and determine a target area where the target image in the to-be-detected image is located by using a preset image detection model.
  • an image detection model corresponding to the picture to be detected is set in advance, and the picture to be detected is detected by the detection model.
  • the target image is the main item to be displayed in the picture to be detected. For example, if the picture to be detected mainly shows a car, the car in the picture to be detected is the target image.
  • the image detection model is preset, and the image detection model may detect one target image or may detect a plurality of target images.
  • the image detection model may be set to detect only an image of a car, or to detect an image of a car and a person, and the like.
  • each record corresponds to the annotation information of one picture.
  • the first column of the annotation file is the complete storage path of each picture in the picture set;
  • the second column is the number of target images in each picture; for example, a picture may contain one car or a plurality of cars;
  • the columns following the second column indicate the area marked for the target image in each picture in the picture set, that is, the coordinates of the target image in the picture: the upper left corner coordinates are represented by topLeft_x and topLeft_y, and the lower right corner coordinates are represented by bottomRight_x and bottomRight_y. It can be understood that if the number of target images in a picture is greater than 1, the picture corresponds to a plurality of upper left corner coordinates and a plurality of lower right corner coordinates. If the number in the second column is greater than or equal to 1, there will be at least 4 columns of numbers after the second column, and the number of columns following the second column must be a multiple of 4.
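The record layout described above can be parsed mechanically. The following is a minimal sketch of such a parser; the file path, coordinate values, and the function name `parse_annotation` are made up for illustration.

```python
# Hypothetical parser for the annotation records described above:
# column 1 is the full storage path, column 2 the number of target
# images, followed by 4 columns per target image: topLeft_x, topLeft_y,
# bottomRight_x, bottomRight_y.
def parse_annotation(line):
    parts = line.split()
    path, count = parts[0], int(parts[1])
    coords = parts[2:]
    # The number of columns after the second must be a multiple of 4.
    assert len(coords) == 4 * count
    boxes = [tuple(map(int, coords[4 * k:4 * k + 4])) for k in range(count)]
    return path, boxes

# Example record with two marked cars (values are illustrative):
path, boxes = parse_annotation("/data/cars/img_001.jpg 2 10 20 110 220 150 40 300 260")
```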
  • the image detection model includes two parts.
  • the first part is a region generation network, used to generate candidate regions where the target image may be located in the picture to be detected; a candidate region is a rectangular area in the picture to be detected in which the target image may exist;
  • the second part is a target detection network for determining a target area in which the target image is located in the candidate area.
  • the area generation network is a deep full convolutional neural network.
  • the convolutional neural network is a feedforward neural network whose artificial neurons can respond to a part of the coverage area, and it performs excellently in large-image processing.
  • the basic structure of the convolutional neural network includes two layers. The first is the feature extraction layer: the input of each neuron is connected to the local receptive field of the previous layer to extract a local feature, and once the local feature is extracted, its positional relationship with other features is also determined. The second is the feature mapping layer: each computing layer of the network is composed of multiple feature maps, each feature map is a plane, and the weights of all neurons on the plane are equal.
  • the image focus detection method further includes:
  • Step a acquiring preset data corresponding to the target image that can be detected by the image detection model
  • Step b adjusting an area generation network of the image detection model according to the preset data, and obtaining the adjusted area generation network;
  • the region generation network in the image detection model is trained, that is, the image detection model is optimized.
  • the area generation network in the image detection model is trained as follows: pictures corresponding to the target images that the image detection model can detect are input into the area generation network, that is, preset data corresponding to the detectable target images is acquired. It can be understood that the preset data is the pictures corresponding to the target images. After the pictures corresponding to the target images are obtained, the area generation network is tested according to those pictures to obtain a test result, and the area generation network is adjusted according to the test result to obtain the adjusted area generation network. In this embodiment, in order to reduce the time for training the area generation network, the area generation network may be initialized in advance.
  • Step c generating the target area training data by using the adjusted area generating network
  • Step d optimizing a target detection network of the image detection model according to the target area training data
  • Step e determining a feature extraction layer shared by the area generation network and the target detection network, and fixing the feature extraction layer.
  • the target area training data is generated by the adjusted area generation network by inputting pictures into the area generation network; the target detection network of the image detection model is tested according to the target area training data to obtain a test result, and the target detection network is optimized according to the test result.
  • After the optimized feature extraction layer of the target detection network is acquired, the feature extraction layer of the area generation network is initialized with the feature extraction layer of the target detection network, and the feature extraction layer of the area generation network is fixed.
  • After the feature extraction layer of the area generation network is fixed, the feature extraction layer of the area generation network is copied into the target detection network to fix the feature extraction layer shared by the target detection network and the area generation network. It can be understood that the area generation network and the target detection network share a feature extraction layer, that is, share multiple convolution layers.
  • In the process of training, the area generation network and the target detection network are alternately optimized.
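The alternating optimization of the two networks described above can be sketched as a four-phase control flow. The helper functions below (`train_rpn`, `train_detector`, `generate_proposals`) are hypothetical stand-ins for the real training routines, so this is only a sketch of the alternation under those assumptions, not an implementation of the training itself.

```python
# Sketch of the alternating optimization described above: the region
# generation network is trained first, its proposals are used to train
# the target detection network, and the shared feature extraction layers
# are then fixed while each network's unique layers are fine-tuned.
def alternating_training(init_features, train_rpn, generate_proposals,
                         train_detector):
    # Phase 1: initialize and train the region generation network.
    rpn = train_rpn(init_features)
    # Phase 2: train the target detection network on its proposals.
    detector = train_detector(init_features, generate_proposals(rpn))
    # Phase 3: re-train the region generation network on top of the
    # detector's feature extraction layers, which are now shared.
    rpn = train_rpn(detector["features"])
    # Phase 4: fine-tune the detector with the shared layers fixed.
    detector = train_detector(detector["features"], generate_proposals(rpn))
    return rpn, detector
```

Each phase hands the shared feature extraction layers to the next, which mirrors the "fix the shared feature extraction layer" step in the text.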
  • Step S20 Calculate an area ratio of the target area in the to-be-detected picture.
  • Step S30 determining, according to the area ratio, whether the focal length of the picture to be detected is in accordance with the shooting requirement.
  • step S30 includes:
  • Step f determining whether the area ratio is less than a preset threshold
  • Step g if the area ratio is less than the preset threshold, determining that the focal length of the picture to be detected does not meet the shooting requirement;
  • Step h if the area ratio is greater than or equal to the preset threshold, it is determined that the focal length of the picture to be detected meets the shooting requirement.
  • Determining, according to the area ratio, whether the focal length of the picture to be detected meets the shooting requirement proceeds as follows: determine whether the area ratio is less than a preset threshold, where the preset threshold is set according to specific needs, for example to 0.05, 0.08, or 0.10. When the area ratio is less than the preset threshold, it is determined that the focal length of the picture to be detected does not meet the shooting requirement; when the area ratio is greater than or equal to the preset threshold, it is determined that the focal length of the picture to be detected meets the shooting requirement.
  • if the focal length does not meet the shooting requirement, prompt information is output, prompting the user that the picture to be detected was taken from too far away and needs to be re-photographed;
  • if the focal length meets the shooting requirement, prompt information is output, prompting the user that the captured picture to be detected meets the shooting requirement, and the picture to be detected is stored.
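Steps S20 and S30 amount to a single ratio test. Below is a minimal sketch, assuming the target area is given as upper-left/lower-right coordinates and using 0.05 as the preset threshold (one of the example values mentioned above); the function name is illustrative.

```python
# Compute the proportion of the picture occupied by the target area and
# compare it with a preset threshold (0.05 here, per the example values).
def focal_length_meets_requirement(box, picture_w, picture_h, threshold=0.05):
    tl_x, tl_y, br_x, br_y = box
    area_ratio = ((br_x - tl_x) * (br_y - tl_y)) / float(picture_w * picture_h)
    # Below the threshold: the picture was shot from too far away.
    return area_ratio >= threshold

# A 100x100 target in a 1000x1000 picture occupies only 1% of the area,
# so the shooting distance is judged too far at a 5% threshold.
```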
  • the target area where the target image is located in the acquired picture to be detected is determined by using a preset image detection model, the proportion of the area occupied by the target area in the picture to be detected is calculated, and whether the focal length of the picture to be detected meets the shooting requirement is determined according to the area ratio.
  • Automatic screening of pictures whose focal length does not meet requirements is thereby realized, and the difficulty of screening out such pictures is reduced.
  • the step S10 includes:
  • Step S11 Acquire the captured image to be detected, and load the to-be-detected image into the area generation network of the image detection model to determine a candidate area of the target image in the to-be-detected picture, where
  • the area generation network is a convolutional neural network
  • Step S12 loading the candidate region into a target detection network of the image detection model to determine a target region in which the target image in the candidate region is located.
  • When the captured picture to be detected is acquired, the picture to be detected is loaded into the area generation network of the image detection model, and the candidate area in the picture to be detected is determined by the area generation network.
  • the candidate region is loaded into a target detection network of the image detection model to determine a target region in which the target image in the candidate region is located.
  • the shape of the candidate area is a rectangle
  • the area generation network is a convolutional neural network. In order to improve the speed at which the target area where the target image is located is determined, the area generation network and the target detection network share a feature extraction layer.
  • the specific process of determining the target area where the target image is located is as follows: a small convolution network with an input dimension of n x n is set on the feature map output by the last convolutional layer of the area generation network. The dimension n x n of this convolution network is smaller than the dimension N x N of the last convolutional layer of the area generation network (n and N are positive integers); that is, the area covered by the last convolutional layer of the area generation network is mapped to a lower-dimensional feature mapping layer.
  • the feature mapping layer is connected to two parallel fully connected layers. In this embodiment, the two fully connected layers are referred to as a cls layer and a reg layer, respectively.
  • the cls layer is configured to determine the probability that a candidate region contains the target image, and the reg layer is used to determine the position of the target image in the candidate region, that is, the size and displacement of the target image.
  • in this embodiment, the dimension of the small convolution network is set to 3 x 3 and its output dimension is 256; this convolutional network layer is followed by the two fully connected layers, the cls layer and the reg layer.
  • At each position, the 3 x 3 convolution kernel generates candidate regions in a total of 9 modes, namely 3 scales and 3 aspect ratios, so that the determination of the size and displacement of a target image loaded into the target detection network in a candidate region is robust.
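The 9 candidate-region modes per position (3 scales by 3 aspect ratios) can be illustrated as follows. The concrete scale and ratio values are assumptions for illustration only; the text fixes the counts, not the values.

```python
# Generate the 9 candidate boxes (3 scales x 3 aspect ratios) centered
# at one sliding-window position. Scale/ratio values are illustrative.
def candidate_boxes(cx, cy, scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    boxes = []
    for s in scales:
        for r in ratios:
            # Keep the box area near s*s while varying height:width = r.
            w = s / r ** 0.5
            h = s * r ** 0.5
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

boxes = candidate_boxes(0, 0)  # 9 boxes for this position
```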
  • the method further includes:
  • Step i acquiring a reference area corresponding to the target image in the image detection model
  • Step j calculating an error of the location of the target image in the candidate region and the reference region, and optimizing the region generation network by a network optimization function according to the error.
  • a reference region corresponding to the target image is acquired in the image detection model; the reference region is determined by the annotation information stored in the image detection model. The upper left corner coordinates and the lower right corner coordinates of the position of the target image in the candidate region are determined, and the upper left corner coordinates and the lower right corner coordinates of the position of the target image in the reference region are determined.
  • The coordinates in the candidate region determine the range of the target image in the candidate region, which is recorded as the first range; the coordinates in the reference region determine the range of the target image in the reference region, which is recorded as the second range. The intersection between the first range and the second range is calculated, the union between the first range and the second range is calculated, and the intersection is divided by the union to obtain the error between the positions of the target image in the candidate region and in the reference region. The error is compared with a preset error to determine whether the error is greater than the preset error.
  • the preset error may be set according to specific needs. In this embodiment, the preset error is set to 0.7.
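The intersection-over-union comparison just described can be sketched directly from the coordinates; boxes are (topLeft_x, topLeft_y, bottomRight_x, bottomRight_y) as in the annotation format, and the function name is illustrative.

```python
# Intersection of the two ranges divided by their union, as described
# above; the result is compared with the preset value (0.7 here).
def overlap_ratio(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width/height of the intersection rectangle (0 if disjoint).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

A candidate region whose ratio with the reference region reaches 0.7 would count as matching the reference region under the preset error used in this embodiment.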
  • the region generation network is optimized by the network optimization function according to the error, specifically, the neurons in the region generation network are optimized.
  • the network optimization function L is:
  • L({p_i}, {t_i}) = (1/D_cls) Σ_i L_cls(p_i, p_i*) + λ (1/D_reg) Σ_i p_i* L_reg(t_i, t_i*)
  • where i is the index of a candidate region in the set of candidate regions;
  • p_i is the probability that a target image exists in the i-th candidate region, and p_i* indicates whether that candidate region actually contains the target image, taking the value 0 or 1: when the value is 1, the candidate region contains the target image, and when the value is 0, the candidate region does not contain the target image; p_i* is determined by the error described above;
  • t_i is the coordinates of the target image predicted by the area generation network in the candidate region, which is formally a 4-dimensional vector, and t_i* is the corresponding coordinates in the reference region;
  • D_cls is the number of input target images in the candidate regions; in this embodiment, D_cls = 256;
  • D_reg is the number of new candidate regions obtained by performing the three scalings and three aspect-ratio conversions on the candidate regions; in this embodiment, D_reg = 256 × 9;
  • λ is set to 10 to balance the importance of determining the candidate area and the target area. It will be appreciated that in other embodiments, D_cls, D_reg, and λ may be set to other values as desired.
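Numerically, the optimization function L can be sketched as follows. The patent does not name the concrete L_cls and L_reg terms, so log loss and smooth L1 are assumed here as conventional choices for a two-term objective of this shape; they are assumptions, not taken from the text.

```python
import math

# Sketch of L = (1/D_cls) * sum L_cls + lambda * (1/D_reg) * sum p*_i L_reg.
def smooth_l1(x):
    # Assumed per-coordinate form of the regression term L_reg.
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5

def optimization_function(p, p_star, t, t_star,
                          d_cls=256, d_reg=256 * 9, lam=10.0):
    # Classification term: log loss over the candidate regions.
    cls = sum(-(ps * math.log(pi) + (1 - ps) * math.log(1 - pi))
              for pi, ps in zip(p, p_star)) / d_cls
    # Regression term: counted only where p*_i = 1 (target present).
    reg = sum(ps * sum(smooth_l1(a - b) for a, b in zip(ti, tsi))
              for ps, ti, tsi in zip(p_star, t, t_star)) / d_reg
    return cls + lam * reg
```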
  • the picture to be detected is loaded into the area generation network and the target detection network of the image detection model to obtain the target area where the target image is located in the picture to be detected, so that whether the focal length of the captured picture to be detected satisfies the shooting requirement is automatically determined according to the target area.
  • a person skilled in the art may understand that all or part of the steps of implementing the above embodiments may be completed by hardware, or may be completed by related hardware instructed by a program, and the program may be stored in a computer readable storage medium.
  • the storage medium mentioned may be a read only memory, a magnetic disk or an optical disk or the like.
  • the present invention further provides an image focus detection apparatus 100.
  • FIG. 3 is a schematic diagram of functional modules of a preferred embodiment of the image focal length detecting apparatus 100 of the present invention.
  • the block diagram shown in FIG. 3 is merely an exemplary diagram of a preferred embodiment, and those skilled in the art can easily add new modules based on the modules of the image focus detection apparatus 100 shown in FIG. 3; the name of each module is a custom name used only to assist in understanding the various program function blocks of the image focus detection apparatus 100 and is not intended to limit the technical solution of the present invention.
  • The core of the technical solution of the present invention is the functions to be achieved by the modules with those custom names.
  • the image focus detection apparatus 100 includes:
  • the first determining module 10 is configured to acquire a captured image to be detected, and determine, by using a preset image detection model, a target area where the target image in the to-be-detected image is located.
  • an image detection model corresponding to the picture to be detected is set in advance, and the picture to be detected is detected by the detection model.
  • the image detection model is preset, and the image detection model may detect one target image or may detect a plurality of target images.
  • the image detection model may be set to detect only an image of a car, or to detect an image of a car and a person, and the like.
  • each record corresponds to the annotation information of one picture.
  • the first column of the annotation file is the complete storage path of each picture in the picture set;
  • the second column is the number of target images in each picture; for example, a picture may contain one car or a plurality of cars;
  • the columns following the second column indicate the area marked for the target image in each picture in the picture set, that is, the coordinates of the target image in the picture: the upper left corner coordinates are represented by topLeft_x and topLeft_y, and the lower right corner coordinates are represented by bottomRight_x and bottomRight_y. It can be understood that if the number of target images in a picture is greater than 1, the picture corresponds to a plurality of upper left corner coordinates and a plurality of lower right corner coordinates. If the number in the second column is greater than or equal to 1, there will be at least 4 columns of numbers after the second column, and the number of columns following the second column must be a multiple of 4.
  • the image detection model includes two parts.
  • the first part is a region generation network, used to generate candidate regions where the target image may be located in the picture to be detected; a candidate region is a rectangular area in the picture to be detected in which the target image may exist;
  • the second part is a target detection network for determining a target area in which the target image is located in the candidate area.
  • the area generation network is a deep full convolutional neural network.
  • the convolutional neural network is a feedforward neural network whose artificial neurons can respond to a part of the coverage area, and it performs excellently in large-image processing.
  • the basic structure of the convolutional neural network includes two layers. The first is the feature extraction layer: the input of each neuron is connected to the local receptive field of the previous layer to extract a local feature, and once the local feature is extracted, its positional relationship with other features is also determined. The second is the feature mapping layer: each computing layer of the network is composed of multiple feature maps, each feature map is a plane, and the weights of all neurons on the plane are equal.
  • the image focus detection apparatus 100 further includes:
  • an acquiring module configured to acquire preset data corresponding to the target image that can be detected by the image detection model
  • An adjustment module configured to adjust an area generation network of the image detection model according to the preset data, to obtain the adjusted area generation network
  • the region generation network in the image detection model is trained, that is, the image detection model is optimized.
  • the area generation network in the image detection model is trained as follows: pictures corresponding to the target images that the image detection model can detect are input into the area generation network, that is, preset data corresponding to the detectable target images is acquired. It can be understood that the preset data is the pictures corresponding to the target images. After the pictures corresponding to the target images are obtained, the area generation network is tested according to those pictures to obtain a test result, and the area generation network is adjusted according to the test result to obtain the adjusted area generation network. In this embodiment, in order to reduce the time for training the area generation network, the area generation network may be initialized in advance.
  • a generating module configured to generate target area training data by using the adjusted area generating network
  • An optimization module configured to optimize a target detection network of the image detection model according to the target area training data
  • a third determining module configured to determine a feature extraction layer shared by the area generation network and the target detection network, and fix the feature extraction layer.
  • the target area training data is generated by the adjusted area generation network by inputting pictures into the area generation network; the target detection network of the image detection model is tested according to the target area training data to obtain a test result, and the target detection network is optimized according to the test result.
  • After the optimized feature extraction layer of the target detection network is acquired, the feature extraction layer of the area generation network is initialized with the feature extraction layer of the target detection network, and the feature extraction layer of the area generation network is fixed.
  • After the feature extraction layer of the area generation network is fixed, the feature extraction layer of the area generation network is copied into the target detection network to fix the feature extraction layer shared by the target detection network and the area generation network.
  • It can be understood that the area generation network and the target detection network share a feature extraction layer, that is, share multiple convolution layers. In the process of training the area generation network and the target detection network, the two networks are alternately optimized.
  • the calculating module 20 is configured to calculate an area ratio of the target area in the to-be-detected picture.
  • the second determining module 30 is configured to determine, according to the area ratio, whether a focal length of the photograph to be detected is in accordance with a shooting requirement.
  • the second determining module 30 includes:
  • a determining unit configured to determine whether the area ratio is less than a preset threshold
  • a determining unit configured to determine that the focal length at which the picture to be detected was captured does not meet the shooting requirement if the area ratio is less than the preset threshold, and that it meets the shooting requirement if the area ratio is greater than or equal to the preset threshold.
  • determining, according to the area ratio, whether the focal length of the picture to be detected meets the shooting requirement proceeds as follows: judge whether the area ratio is less than a preset threshold, where the threshold is set according to specific needs, for example 0.05, 0.08, or 0.10. When the area ratio is less than the preset threshold, the focal length at which the picture was captured does not meet the shooting requirement; when it is greater than or equal to the threshold, the focal length meets the shooting requirement.
  • prompt information is output, informing the user that the picture to be detected was taken from too far away and must be re-photographed;
  • prompt information is output, informing the user that the captured picture meets the shooting requirement, and the picture to be detected is stored.
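As a minimal sketch of the threshold logic described above — the default threshold of 0.05 and the prompt strings are illustrative assumptions, not values fixed by the patent:

```python
def check_focal_length(area_ratio: float, threshold: float = 0.05) -> bool:
    """Return True when the shooting focal length meets the requirement,
    i.e. the target region occupies at least `threshold` of the picture area."""
    return area_ratio >= threshold

# A target covering 2% of the frame fails a 5% threshold,
# so the user would be prompted to re-shoot from closer.
if check_focal_length(0.02):
    print("picture meets the shooting requirement; storing it")
else:
    print("target too small: picture was taken from too far away, please re-shoot")
```

Note that the boundary case (ratio exactly equal to the threshold) is treated as meeting the requirement, matching the "greater than or equal to" wording above.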
  • the target region in which the target image is located in the picture to be detected is determined through a preset image detection model, the area ratio of the target region within the picture is calculated, and whether the focal length at which the picture was captured meets the shooting requirement is determined according to the area ratio. This automatically screens out pictures whose focal length does not meet the requirement, reducing the difficulty of screening such pictures.
  • the first determining module 10 is further configured to acquire the captured picture to be detected, load it into the region generation network of the image detection model to determine candidate regions of the target image in the picture (the region generation network being a convolutional neural network), and load the candidate regions into the target detection network of the image detection model to determine the target region in which the target image is located within them.
  • when the captured picture to be detected is acquired, it is loaded into the region generation network of the image detection model, which determines the candidate regions in the picture.
  • the candidate region is loaded into a target detection network of the image detection model to determine a target region in which the target image in the candidate region is located.
  • the shape of a candidate region is a rectangle.
  • the region generation network is a convolutional neural network. To speed up determination of the target region in which the target image is located, the region generation network and the target detection network share a feature extraction layer.
  • the specific process of determining the target region in which the target image is located is: a small convolutional network with input dimension n×n is placed on the feature map output by the last convolutional layer of the region generation network.
  • the dimension n×n of this small network is smaller than the dimension N×N of the last convolutional layer of the region generation network (n and N being positive integers); that is, the area covered by the last convolutional layer of the region generation network is mapped to a lower-dimensional feature mapping layer.
  • the feature mapping layer is connected to two parallel fully connected layers. In this embodiment, the two fully connected layers are referred to as a cls layer and a reg layer, respectively.
  • the cls layer is used to determine the probability that a candidate region contains the target image, and the reg layer is used to determine the position of the target image within the candidate region, so as to determine the target image's size and displacement.
  • for example, when the small convolutional network's dimension is set to 3×3, the region generation network is a convolutional layer with a 3×3 kernel and 256 outputs, followed by the two fully connected layers, the cls layer and the reg layer.
  • at each position, the 3×3 kernel generates candidate regions in 9 modes in total (3 scales × 3 aspect ratios), so that the sizes and displacements of target images in the candidate regions loaded into the target detection network are handled robustly.
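The 3-scale × 3-aspect-ratio scheme can be illustrated as follows. The base size of 16 pixels, the scale factors (8, 16, 32), and the ratios (0.5, 1, 2) are assumptions borrowed from common region-proposal configurations, not values stated in this document:

```python
def generate_anchors(cx, cy, base=16, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0)):
    """Generate the 9 candidate rectangles (3 scales x 3 aspect ratios)
    centred at (cx, cy), each as an (x1, y1, x2, y2) box."""
    anchors = []
    for s in scales:
        for r in ratios:
            # keep the area (base*s)^2 fixed while varying the width/height ratio r
            w = base * s * (r ** 0.5)
            h = base * s / (r ** 0.5)
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors

boxes = generate_anchors(100, 100)
print(len(boxes))  # 9 candidate regions per position
```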
  • the first determining module 10 includes:
  • An acquiring unit configured to acquire a reference area corresponding to the target image in the image detection model
  • an optimization unit configured to calculate an error of the location of the target image in the candidate region and the reference region, and optimize the region generation network by using a network optimization function according to the error.
  • a reference region corresponding to the target image is acquired from the image detection model; the reference region is determined by the annotation information stored in the model. The top-left and bottom-right coordinates of the target image's position are determined both in the candidate region and in the reference region.
  • the range of the target image in the candidate region, determined from its top-left and bottom-right coordinates, is recorded as the first range; the corresponding range in the reference region is recorded as the second range. The intersection of the first and second ranges is computed, as is their union, and the intersection is divided by the union to obtain the error between the target image's positions in the candidate region and the reference region. This error is compared with a preset error to judge whether it exceeds the preset error.
  • the preset error may be set according to specific needs; in this embodiment, the preset error is set to 0.7.
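The intersection-over-union computation described above can be sketched as follows; the box coordinates follow the topLeft/bottomRight convention used in the annotation format, and the function name is illustrative:

```python
def iou(box_a, box_b):
    """Overlap measure between two boxes given as
    (topLeft_x, topLeft_y, bottomRight_x, bottomRight_y):
    intersection area divided by union area."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection rectangle (zero area when the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

# A candidate whose IoU with the reference region is >= 0.7
# is treated as containing the target image.
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes -> 1.0
```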
  • the region generation network is optimized by the network optimization function according to the error, specifically, the neurons in the region generation network are optimized.
  • the network optimization function L is:
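The formula image itself did not survive extraction here. From the term definitions that follow (i, p_i, t_i, D_cls, D_reg, λ), it matches the standard two-term region-proposal objective, reconstructed below as an assumption:

```latex
L\bigl(\{p_i\},\{t_i\}\bigr)
  = \frac{1}{D_{\mathrm{cls}}}\sum_i L_{\mathrm{cls}}\bigl(p_i,\,p_i^{*}\bigr)
  + \lambda\,\frac{1}{D_{\mathrm{reg}}}\sum_i p_i^{*}\,L_{\mathrm{reg}}\bigl(t_i,\,t_i^{*}\bigr)
```

Here $p_i^{*}$ and $t_i^{*}$ denote the 0/1 label and the reference-box coordinates for the $i$-th candidate region, and $L_{\mathrm{cls}}$ and $L_{\mathrm{reg}}$ are the classification and regression loss terms.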
  • i is an index of a set of candidate regions composed of a plurality of candidate regions
  • p_i is the probability that a target image exists in the i-th candidate region; p_i* indicates whether the candidate region contains the target image, taking the value 0 or 1. A value of 1 indicates that the candidate region contains the target image, and 0 that it does not; the value is decided by the error.
  • t_i is the coordinate, predicted by the region generation network, of the target image in the candidate region; formally it is a 4-dimensional vector.
  • D_cls is the number of target images input in the candidate regions; in this embodiment, D_cls = 256.
  • D_reg is the number of new candidate regions obtained by applying the 3 scalings and 3 aspect-ratio transformations to the candidate regions; in this embodiment, D_reg = 256 × 9.
  • λ is set to 10 to balance the importance of determining the candidate region and the target region. It will be appreciated that, in other embodiments, D_cls, D_reg, and λ may be set to other values as desired.
  • the picture to be detected is loaded into the region generation network and target detection network of the image detection model to obtain the target region in which the target image is located in the picture to be detected, so that whether the focal length at which the picture was captured satisfies the shooting requirement can be judged automatically from the target region.
  • in hardware implementation, the above first determining module 10, calculating module 20, second determining module 30, and so on may be embedded in, or independent of, the image focal-length detection device in hardware form, or may be stored in software form in the memory of the device so that the processor can invoke the operations corresponding to each module.
  • the processor may be a central processing unit (CPU), a microprocessor, a microcontroller, or the like.
  • FIG. 4 is a schematic structural diagram of a device in a hardware operating environment according to an embodiment of the present invention.
  • the image focal-length detection device may be a PC, or a terminal device such as a smartphone, tablet computer, e-book reader, MP3 (Moving Picture Experts Group Audio Layer III) player, MP4 (Moving Picture Experts Group Audio Layer IV) player, or portable computer.
  • the image focus detection apparatus may include a processor 1001, such as a CPU, and a memory 1005.
  • the processor 1001 and the memory 1005 can implement a communication connection through the communication bus 1002.
  • the memory 1005 may be a high speed RAM memory or a non-volatile memory such as a disk memory.
  • the memory 1005 can also optionally be a storage device independent of the aforementioned processor 1001.
  • the image focal-length detection device may further include a user interface, a network interface, a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, and the like.
  • the user interface may include a display, an input unit such as a keyboard, and the user interface may also include a standard wired interface and a wireless interface.
  • the network interface can optionally include a standard wired interface or a wireless interface (such as a WI-FI interface).
  • the picture to be detected can be acquired by the camera.
  • the image focal-length detection device structure shown in FIG. 4 does not constitute a limitation of the device, which may include more or fewer components than illustrated, combine certain components, or arrange components differently.
  • as shown in FIG. 4, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, and an image focal-length detection program.
  • the operating system is a program that manages and controls the hardware and software resources of the image focus detection device, supporting the operation of the image focus detection program and other software and/or programs.
  • the network communication module is used to implement communication between the memory 1005 and other hardware and software in the image focus detection device.
  • the processor 1001 can be configured to execute an image focus detection program stored in the memory 1005, implementing the following steps:
  • processor 1001 is further configured to execute the image focus detection program to implement the following steps:
  • the regional generation network is a convolutional neural network
  • the candidate region is loaded into a target detection network of the image detection model to determine a target region in which the target image in the candidate region is located.
  • processor 1001 is further configured to execute the image focus detection program to implement the following steps:
  • processor 1001 is further configured to execute the image focus detection program to implement the following steps:
  • if the area ratio is greater than or equal to the preset threshold, it is determined that the focal length of the picture to be detected meets the shooting requirement.
  • processor 1001 is further configured to execute the image focus detection program to implement the following steps:
  • the embodiment of the image focal length detecting device of the present invention is substantially the same as the embodiment of the image focal length detecting method and device, and details are not described herein again.
  • the present invention provides a computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the following steps:
  • the one or more programs may be executed by the one or more processors to implement the following steps:
  • the regional generation network is a convolutional neural network
  • the candidate region is loaded into a target detection network of the image detection model to determine a target region in which the target image in the candidate region is located.
  • the one or more programs may be executed by the one or more processors to implement the following steps:
  • the one or more programs may be executed by the one or more processors to implement the following steps:
  • if the area ratio is greater than or equal to the preset threshold, it is determined that the focal length of the picture to be detected meets the shooting requirement.
  • the one or more programs may be executed by the one or more processors to implement the following steps:
  • the embodiment of the computer readable storage medium of the present invention is substantially the same as the embodiment of the image focus detection method and apparatus described above, and details are not described herein again.
  • the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc), which includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods described in the various embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

An image focal-length detection method, detection apparatus, device, and computer-readable storage medium. The method includes: acquiring a captured picture to be detected, and determining, through a preset image detection model, the target region in which the target image in the picture to be detected is located (S10); calculating the area proportion occupied by the target region in the picture to be detected (S20); and determining, according to the area proportion, whether the focal length at which the picture to be detected was captured meets a shooting requirement (S30). The method, detection apparatus, device, and computer-readable storage medium automatically screen out pictures whose focal length does not meet the requirement, reducing the difficulty of screening such pictures.

Description

Image focal-length detection method, apparatus, device, and computer-readable storage medium — Technical Field
The present invention relates to the field of image technology, and in particular to an image focal-length detection method, apparatus, device, and computer-readable storage medium.
Background
In business scenarios that demand a high level of picture detail, the shooting distance directly affects a picture's usefulness. Pictures taken from too far away cannot provide the required detail; they waste storage space and consume valuable computing resources. It is therefore necessary to filter out pictures taken from too far away before storing and processing business pictures, but manual screening of such pictures consumes considerable labor and material resources, and becomes ever harder as the volume of picture data grows.
Summary of the Invention
The main object of the present invention is to provide an image focal-length detection method, apparatus, device, and computer-readable storage medium, aiming to solve the existing technical problem that screening out pictures whose focal length does not meet the requirement is difficult.
To achieve the above object, the present invention provides an image focal-length detection method, including:
acquiring a captured picture to be detected, and determining, through a preset image detection model, a target region in which a target image in the picture to be detected is located;
calculating an area proportion occupied by the target region in the picture to be detected;
determining, according to the area proportion, whether the focal length at which the picture to be detected was captured meets a shooting requirement.
Preferably, the step of acquiring the captured picture to be detected and determining, through the preset image detection model, the target region in which the target image is located includes:
acquiring the captured picture to be detected, and loading it into a region generation network of the image detection model to determine candidate regions of the target image in the picture, the region generation network being a convolutional neural network;
loading the candidate regions into a target detection network of the image detection model to determine the target region in which the target image is located within the candidate regions.
Preferably, before the step of loading the candidate regions into the target detection network of the image detection model to determine the target region, the method further includes:
acquiring a reference region corresponding to the target image in the image detection model;
calculating an error between the positions of the target image in the candidate region and in the reference region, and optimizing the region generation network through a network optimization function according to the error.
Preferably, the step of determining, according to the area proportion, whether the focal length meets the shooting requirement includes:
judging whether the area proportion is less than a preset threshold;
if the area proportion is less than the preset threshold, determining that the focal length at which the picture to be detected was captured does not meet the shooting requirement;
if the area proportion is greater than or equal to the preset threshold, determining that the focal length meets the shooting requirement.
Preferably, before the step of acquiring the captured picture to be detected and determining the target region through the preset image detection model, the method further includes:
acquiring preset data corresponding to the target images that the image detection model can detect;
adjusting the region generation network of the image detection model according to the preset data, to obtain the adjusted region generation network;
generating target-region training data through the adjusted region generation network;
optimizing the target detection network of the image detection model according to the target-region training data;
determining a feature extraction layer shared by the region generation network and the target detection network, and fixing the feature extraction layer.
In addition, to achieve the above object, the present invention further provides an image focal-length detection apparatus, including:
a first determining module, configured to acquire a captured picture to be detected and determine, through a preset image detection model, a target region in which a target image in the picture to be detected is located;
a calculating module, configured to calculate an area proportion occupied by the target region in the picture to be detected;
a second determining module, configured to determine, according to the area proportion, whether the focal length at which the picture to be detected was captured meets a shooting requirement.
Preferably, the first determining module is further configured to acquire the captured picture to be detected and load it into a region generation network of the image detection model to determine candidate regions of the target image in the picture, the region generation network being a convolutional neural network; and to load the candidate regions into a target detection network of the image detection model to determine the target region in which the target image is located within the candidate regions.
Preferably, the first determining module includes:
an acquiring unit, configured to acquire a reference region corresponding to the target image in the image detection model;
an optimizing unit, configured to calculate an error between the positions of the target image in the candidate region and in the reference region, and optimize the region generation network through a network optimization function according to the error.
Preferably, the second determining module includes:
a judging unit, configured to judge whether the area proportion is less than a preset threshold;
a determining unit, configured to determine that the focal length does not meet the shooting requirement if the area proportion is less than the preset threshold, and that it meets the shooting requirement if the area proportion is greater than or equal to the threshold.
Preferably, the image focal-length detection apparatus further includes:
an acquiring module, configured to acquire preset data corresponding to the target images that the image detection model can detect;
an adjusting module, configured to adjust the region generation network of the image detection model according to the preset data, obtaining the adjusted region generation network;
a generating module, configured to generate target-region training data through the adjusted region generation network;
an optimizing module, configured to optimize the target detection network of the image detection model according to the target-region training data;
a third determining module, configured to determine the feature extraction layer shared by the region generation network and the target detection network, and fix the feature extraction layer.
In addition, to achieve the above object, the present invention further provides an image focal-length detection device, including a processor and a memory;
the processor is configured to execute an image focal-length detection program stored in the memory, to implement the following steps:
acquiring a captured picture to be detected, and determining, through a preset image detection model, a target region in which a target image in the picture to be detected is located;
calculating an area proportion occupied by the target region in the picture to be detected;
determining, according to the area proportion, whether the focal length at which the picture to be detected was captured meets a shooting requirement.
Preferably, the processor is further configured to execute the image focal-length detection program to implement the following steps:
acquiring the captured picture to be detected, and loading it into a region generation network of the image detection model to determine candidate regions of the target image in the picture, the region generation network being a convolutional neural network;
loading the candidate regions into a target detection network of the image detection model to determine the target region in which the target image is located within the candidate regions.
Preferably, the processor is further configured to execute the image focal-length detection program to implement the following steps:
acquiring a reference region corresponding to the target image in the image detection model;
calculating an error between the positions of the target image in the candidate region and in the reference region, and optimizing the region generation network through a network optimization function according to the error.
Preferably, the processor is further configured to execute the image focal-length detection program to implement the following steps:
judging whether the area proportion is less than a preset threshold;
if the area proportion is less than the preset threshold, determining that the focal length does not meet the shooting requirement;
if the area proportion is greater than or equal to the preset threshold, determining that the focal length meets the shooting requirement.
Preferably, the processor is further configured to execute the image focal-length detection program to implement the following steps:
acquiring preset data corresponding to the target images that the image detection model can detect;
adjusting the region generation network of the image detection model according to the preset data, obtaining the adjusted region generation network;
generating target-region training data through the adjusted region generation network;
optimizing the target detection network of the image detection model according to the target-region training data;
determining the feature extraction layer shared by the region generation network and the target detection network, and fixing the feature extraction layer.
In addition, to achieve the above object, the present invention further provides a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the following steps:
acquiring a captured picture to be detected, and determining, through a preset image detection model, a target region in which a target image in the picture to be detected is located;
calculating an area proportion occupied by the target region in the picture to be detected;
determining, according to the area proportion, whether the focal length at which the picture to be detected was captured meets a shooting requirement.
Preferably, the one or more programs are executable by the one or more processors to implement the following steps:
acquiring the captured picture to be detected, and loading it into a region generation network of the image detection model to determine candidate regions of the target image in the picture, the region generation network being a convolutional neural network;
loading the candidate regions into a target detection network of the image detection model to determine the target region in which the target image is located within the candidate regions.
Preferably, the one or more programs are executable by the one or more processors to implement the following steps:
acquiring a reference region corresponding to the target image in the image detection model;
calculating an error between the positions of the target image in the candidate region and in the reference region, and optimizing the region generation network through a network optimization function according to the error.
Preferably, the one or more programs are executable by the one or more processors to implement the following steps:
judging whether the area proportion is less than a preset threshold;
if the area proportion is less than the preset threshold, determining that the focal length does not meet the shooting requirement;
if the area proportion is greater than or equal to the preset threshold, determining that the focal length meets the shooting requirement.
Preferably, the one or more programs are executable by the one or more processors to implement the following steps:
acquiring preset data corresponding to the target images that the image detection model can detect;
adjusting the region generation network of the image detection model according to the preset data, obtaining the adjusted region generation network;
generating target-region training data through the adjusted region generation network;
optimizing the target detection network of the image detection model according to the target-region training data;
determining the feature extraction layer shared by the region generation network and the target detection network, and fixing the feature extraction layer.
The present invention determines, through a preset image detection model, the target region in which the target image in the acquired picture to be detected is located, calculates the area proportion occupied by the target region in the picture, and determines according to that proportion whether the focal length at which the picture was captured meets the shooting requirement. This automatically screens out pictures whose focal length does not meet the requirement, reducing the difficulty of such screening.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a preferred embodiment of the image focal-length detection method of the present invention;
FIG. 2 is a schematic flowchart, in an embodiment of the present invention, of acquiring a captured picture to be detected and determining, through a preset image detection model, the target region in which the target image in the picture is located;
FIG. 3 is a schematic diagram of the functional modules of a preferred embodiment of the image focal-length detection apparatus of the present invention;
FIG. 4 is a schematic structural diagram of the device in the hardware operating environment involved in an embodiment of the present invention.
The realization of the objects, the functional features, and the advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described here are merely illustrative of the present invention and are not intended to limit it.
The present invention provides an image focal-length detection method.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a preferred embodiment of the image focal-length detection method of the present invention.
In this embodiment, the image focal-length detection method includes:
Step S10: acquire a captured picture to be detected, and determine, through a preset image detection model, the target region in which the target image in the picture to be detected is located.
When a captured picture to be detected is acquired and it must be determined whether the focal length at which it was captured satisfies the user's requirement, the image detection model preset for the picture is acquired, and the target region in which the target image is located is determined through that model. It should be noted that the target image is the main object the picture is meant to show; for example, if the picture is meant to show a car, the car in the picture is the target image. The image detection model is preset and may detect a single target image or multiple target images; for example, it may be set to detect only images of cars, or to detect images of both cars and people.
Further, when setting up the image detection model, a picture set corresponding to the target images the model is to detect is first collected; the set contains multiple pictures of the same target image, for example 10 pictures containing cars. The target images in the picture set are annotated to obtain annotation information, and each picture's annotation information is stored as a list in the same folder, where each record corresponds to the annotations of one picture. The first column of the list is the full storage path of each picture in the set; the second column is the number of target images in the picture, since a picture may contain, say, one car or several; the columns after the second give the annotated region of each target image, i.e., its coordinates in the picture, expressed as the top-left coordinates topLeft_x and topLeft_y and the bottom-right coordinates bottomRight_x and bottomRight_y. Understandably, if a picture contains more than one target image, it corresponds to multiple top-left and bottom-right coordinate pairs. If the number in the second column is greater than or equal to 1, at least 4 columns follow the second column, and the number of columns after the second must be a multiple of 4.
The image detection model consists of two parts. The first part is a region generation network, which generates candidate regions in which the target image in the picture to be detected may be located — rectangular regions of the picture in which the target image may exist. The second part is a target detection network, which determines, within the candidate regions, the target region in which the target image is located. It should be noted that the region generation network is a deep fully convolutional neural network. A convolutional neural network is a feed-forward neural network whose artificial neurons respond to surrounding units within part of their coverage; it performs outstandingly for large-scale image processing. Its basic structure includes two layers: a feature extraction layer, in which each neuron's input is connected to the local receptive field of the previous layer and extracts local features, the positional relationship between a feature and other features being fixed once it is extracted; and a feature mapping layer, in which each computational layer of the network consists of multiple feature maps, each feature map being a plane on which all neurons share equal weights.
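A sketch of how one record of the annotation list described above might be parsed; the record layout (path, object count, then four coordinates per object) follows the description, while the whitespace-separated format, file path, and function name are assumptions for illustration:

```python
def parse_record(line: str):
    """Parse one annotation record: <path> <count> then, per object,
    topLeft_x topLeft_y bottomRight_x bottomRight_y."""
    fields = line.split()
    path, count = fields[0], int(fields[1])
    coords = [float(v) for v in fields[2:]]
    # columns after the count must come in groups of 4 (one box per object)
    assert len(coords) == 4 * count, "coordinate columns must be a multiple of 4"
    boxes = [tuple(coords[4 * k: 4 * k + 4]) for k in range(count)]
    return path, boxes

path, boxes = parse_record("/data/cars/img001.jpg 2 10 20 110 90 150 40 300 200")
print(path, len(boxes))  # /data/cars/img001.jpg 2
```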
Further, the image focal-length detection method also includes:
Step a: acquire preset data corresponding to the target images that the image detection model can detect;
Step b: adjust the region generation network of the image detection model according to the preset data, obtaining the adjusted region generation network;
Before the image detection model is used, its region generation network must first be trained, i.e., the model must be optimized. The region generation network in the model is trained first, as follows: pictures corresponding to the target images the model detects are input into the region generation network; that is, preset data corresponding to the detectable target images is acquired. Understandably, the preset data is the pictures corresponding to the target images. After these pictures are obtained, the region generation network is tested against them, a test result is obtained, and the network is adjusted according to the test result to obtain the adjusted region generation network. In this embodiment, to reduce the time needed to train the region generation network, it may first be initialized.
Step c: generate target-region training data through the adjusted region generation network;
Step d: optimize the target detection network of the image detection model according to the target-region training data;
Step e: determine the feature extraction layer shared by the region generation network and the target detection network, and fix it.
After the adjusted region generation network is obtained, target-region training data is generated in it from the input pictures; the target detection network of the image detection model is tested against this training data to obtain test results, and the target detection network is optimized according to those results. After the target detection network is optimized, its feature extraction layer is acquired and used to initialize the feature extraction layer of the region generation network, which is then fixed. Once the region generation network's feature extraction layer is fixed, it is copied into the target detection network, fixing the feature extraction layer shared by the two networks. Understandably, the region generation network and the target detection network share the feature extraction layer, i.e., share multiple convolutional layers; during training, the two networks are optimized alternately.
Step S20: calculate the area proportion occupied by the target region in the picture to be detected.
After the target region in which the target image is located has been determined, the area of the target region and the area of the picture to be detected are calculated, and the target region's area is divided by the picture's area to obtain the proportion of the picture occupied by the target region.
Step S30: determine, according to the area proportion, whether the focal length at which the picture to be detected was captured meets the shooting requirement.
After the proportion of the picture occupied by the target region is determined, whether the focal length meets the shooting requirement is determined according to that proportion.
Further, step S30 includes:
Step f: judge whether the area proportion is less than a preset threshold;
Step g: if the area proportion is less than the preset threshold, determine that the focal length does not meet the shooting requirement;
Step h: if the area proportion is greater than or equal to the preset threshold, determine that the focal length meets the shooting requirement.
The specific process of determining, according to the area proportion, whether the focal length meets the shooting requirement is: judge whether the area proportion is less than a preset threshold, where the threshold is set according to specific needs, for example 0.05, 0.08, or 0.10. When the area proportion is less than the threshold, the focal length at which the picture was captured does not meet the shooting requirement; when it is greater than or equal to the threshold, the focal length meets the requirement. Further, when the focal length is determined not to meet the requirement, prompt information is output telling the user that the picture was taken from too far away and must be re-photographed; when the focal length meets the requirement, prompt information is output telling the user that the captured picture meets the requirement, and the picture to be detected is stored.
This embodiment determines, through a preset image detection model, the target region in which the target image in the acquired picture to be detected is located, calculates the area proportion occupied by the target region in the picture, and determines according to that proportion whether the focal length meets the shooting requirement. This automatically screens out pictures whose focal length does not meet the requirement, reducing the difficulty of such screening.
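Steps S20 and S30 can be sketched end-to-end as follows; the box coordinates, the 1920×1080 picture size, and the 0.05 threshold are illustrative assumptions:

```python
def area_ratio(box, image_w, image_h):
    """S20: target-region area divided by the picture area.
    `box` is (topLeft_x, topLeft_y, bottomRight_x, bottomRight_y)."""
    x1, y1, x2, y2 = box
    return ((x2 - x1) * (y2 - y1)) / (image_w * image_h)

def focal_length_ok(box, image_w, image_h, threshold=0.05):
    """S30: the focal length meets the requirement when the ratio
    is greater than or equal to the preset threshold."""
    return area_ratio(box, image_w, image_h) >= threshold

# A 200x150 target region in a 1920x1080 picture occupies ~1.4% of the
# frame, below a 5% threshold, so the picture should be re-shot.
ratio = area_ratio((0, 0, 200, 150), 1920, 1080)
print(round(ratio, 4), focal_length_ok((0, 0, 200, 150), 1920, 1080))  # 0.0145 False
```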
Further, another embodiment of the present invention is proposed on the basis of the preferred embodiment of the image focal-length detection method. Referring to FIG. 2, in this embodiment, step S10 includes:
Step S11: acquire the captured picture to be detected, and load it into the region generation network of the image detection model to determine candidate regions of the target image in the picture, the region generation network being a convolutional neural network;
Step S12: load the candidate regions into the target detection network of the image detection model to determine the target region in which the target image is located within the candidate regions.
When the captured picture to be detected is acquired, it is loaded into the region generation network, which determines the candidate regions in the picture. The candidate regions are then loaded into the target detection network to determine the target region in which the target image is located. In this embodiment, the picture to be detected has one or more candidate regions, each of which may contain the target image. Further, the candidate regions are rectangles, and the region generation network is a convolutional neural network. To speed up determination of the target region, the region generation network and the target detection network share a feature extraction layer.
The specific process of determining the target region is: a small convolutional network with input dimension n×n is placed on the feature map output by the last convolutional layer of the region generation network. The dimension n×n of this small network is smaller than the dimension N×N of the last convolutional layer (n and N being positive integers); that is, the area covered by the last convolutional layer of the region generation network is mapped to a lower-dimensional feature mapping layer. The feature mapping layer is connected to two parallel fully connected layers, referred to in this embodiment as the cls layer and the reg layer. The cls layer is used to determine the likelihood — i.e., the probability — that a candidate region contains the target image, and the reg layer is used to determine the position of the target image within the candidate region, so as to determine its size and displacement. For example, when the small network's dimension is set to 3×3, the region generation network is a convolutional layer with a 3×3 kernel and 256 outputs, followed by the two fully connected layers cls and reg. At each position, the 3×3 kernel generates candidate regions in 9 modes in total (3 scales × 3 aspect ratios), so that the sizes and displacements of target images in the candidate regions loaded into the target detection network are handled robustly.
Further, before step S12, the method also includes:
Step i: acquire a reference region corresponding to the target image in the image detection model;
Step j: calculate the error between the positions of the target image in the candidate region and in the reference region, and optimize the region generation network through a network optimization function according to the error.
After the candidate regions are determined, a reference region corresponding to the target image is acquired from the image detection model; the reference region is determined by the annotation information stored in the model. The top-left and bottom-right coordinates of the target image's position in the candidate region, and in the reference region, are determined. The range of the target image in the candidate region, determined from its top-left and bottom-right coordinates, is recorded as the first range; the corresponding range in the reference region is recorded as the second range. The intersection of the first and second ranges is computed, as is their union, and the intersection is divided by the union to obtain the error between the target image's positions in the candidate region and the reference region. This error is compared with a preset error to judge whether it exceeds the preset error. When the error is greater than or equal to the preset error, the candidate region contains the target image; when it is smaller, the candidate region does not. The preset error can be set as needed; in this embodiment it is set to 0.7.
After the error is obtained, the region generation network — specifically, the neurons in it — is optimized through the network optimization function according to the error. The network optimization function L is

L({p_i}, {t_i}) = (1/D_cls)·Σ_i L_cls(p_i, p_i*) + λ·(1/D_reg)·Σ_i p_i*·L_reg(t_i, t_i*)

where i is the index over the set of candidate regions, and p_i is the probability that a target image exists in the i-th candidate region. p_i* indicates whether the candidate region contains the target image, taking the value 0 or 1: a value of 1 indicates that the candidate region contains the target image, and 0 that it does not, as decided by the error. t_i is the coordinate, predicted by the region generation network, of the target image in the candidate region; formally it is a 4-dimensional vector. D_cls is the number of target images input in the candidate regions; in this embodiment, D_cls = 256. D_reg is the number of new candidate regions obtained by applying the 3 scalings and 3 aspect-ratio transformations to the candidate regions; in this embodiment, D_reg = 256 × 9. λ is set to 10 to balance the importance of determining the candidate region and the target region. Understandably, in other embodiments, D_cls, D_reg, and λ may be set to other values as needed.
This embodiment loads the picture to be detected into the region generation network and target detection network of the image detection model to obtain the target region in which the target image is located in the picture, so that whether the focal length at which the picture was captured satisfies the shooting requirement can be judged automatically from the target region.
Persons of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The present invention further provides an image focal-length detection apparatus 100.
Referring to FIG. 3, FIG. 3 is a schematic diagram of the functional modules of a preferred embodiment of the image focal-length detection apparatus 100 of the present invention.
It should be emphasized that, for those skilled in the art, the module diagram of FIG. 3 is merely an example of a preferred embodiment, around whose modules new modules can easily be supplemented. The module names are self-defined, used only to aid understanding of the apparatus's program function blocks, and do not limit the technical solution of the present invention; the core of the technical solution lies in the functions to be achieved by the modules so named.
In this embodiment, the image focal-length detection apparatus 100 includes:
a first determining module 10, configured to acquire a captured picture to be detected and determine, through a preset image detection model, the target region in which the target image in the picture is located.
When a captured picture to be detected is acquired and it must be determined whether the focal length at which it was captured satisfies the user's requirement, the image detection model preset for the picture is acquired, and the target region in which the target image is located is determined through that model. It should be noted that the target image is the main object the picture is meant to show; for example, if the picture is meant to show a car, the car in the picture is the target image. The image detection model is preset and may detect a single target image or multiple target images; for example, it may be set to detect only images of cars, or images of both cars and people.
Further, when setting up the image detection model, a picture set corresponding to the target images to be detected is first collected; the set contains multiple pictures of the same target image, for example 10 pictures containing cars. The target images in the set are annotated to obtain annotation information, and each picture's annotations are stored as a list in the same folder, one record per picture. The first column of the list is each picture's full storage path; the second column is the number of target images in the picture, since a picture may contain one car or several; the columns after the second give each target image's annotated region — its coordinates in the picture, expressed as the top-left coordinates topLeft_x and topLeft_y and the bottom-right coordinates bottomRight_x and bottomRight_y. Understandably, if a picture contains more than one target image, it corresponds to multiple top-left and bottom-right coordinate pairs. If the number in the second column is at least 1, at least 4 columns follow the second column, and the number of columns after the second must be a multiple of 4.
The image detection model consists of two parts: a region generation network, which generates candidate regions — rectangular regions of the picture to be detected in which the target image may exist — and a target detection network, which determines, within the candidate regions, the target region in which the target image is located. It should be noted that the region generation network is a deep fully convolutional neural network. A convolutional neural network is a feed-forward neural network whose artificial neurons respond to surrounding units within part of their coverage; it performs outstandingly for large-scale image processing. Its basic structure includes two layers: a feature extraction layer, in which each neuron's input connects to the previous layer's local receptive field and extracts local features, the positional relationships of which to other features are thereby fixed; and a feature mapping layer, in which each computational layer consists of multiple feature maps, each a plane on which all neurons share equal weights.
Further, the image focal-length detection apparatus 100 also includes:
an acquiring module, configured to acquire preset data corresponding to the target images that the image detection model can detect;
an adjusting module, configured to adjust the region generation network of the image detection model according to the preset data, obtaining the adjusted region generation network;
Before the image detection model is used, its region generation network must first be trained, i.e., the model optimized. The region generation network is trained first: pictures corresponding to the target images the model detects are input into it, i.e., the preset data corresponding to the detectable target images is acquired. Understandably, the preset data is the pictures corresponding to the target images. After these pictures are obtained, the region generation network is tested against them, a test result obtained, and the network adjusted accordingly, yielding the adjusted region generation network. In this embodiment, to reduce training time, the region generation network may first be initialized.
a generating module, configured to generate target-region training data through the adjusted region generation network;
an optimizing module, configured to optimize the target detection network of the image detection model according to the target-region training data;
a third determining module, configured to determine the feature extraction layer shared by the region generation network and the target detection network, and fix the feature extraction layer.
After the adjusted region generation network is obtained, target-region training data is generated in it from the input pictures; the target detection network is tested against this training data to obtain test results, and is optimized according to those results. After the target detection network is optimized, its feature extraction layer is acquired and used to initialize the region generation network's feature extraction layer, which is then fixed. Once fixed, the region generation network's feature extraction layer is copied into the target detection network, fixing the layer shared by the two networks. Understandably, the region generation network and the target detection network share the feature extraction layer, i.e., share multiple convolutional layers; during training, the two networks are optimized alternately.
a calculating module 20, configured to calculate the area proportion occupied by the target region in the picture to be detected.
After the target region in which the target image is located has been determined, the areas of the target region and of the picture to be detected are calculated, and the former is divided by the latter to obtain the proportion of the picture occupied by the target region.
a second determining module 30, configured to determine, according to the area proportion, whether the focal length at which the picture was captured meets the shooting requirement.
After the proportion of the picture occupied by the target region is determined, whether the focal length meets the shooting requirement is determined according to that proportion.
Further, the second determining module 30 includes:
a judging unit, configured to judge whether the area proportion is less than a preset threshold;
a determining unit, configured to determine that the focal length does not meet the shooting requirement if the proportion is less than the preset threshold, and that it meets the requirement if the proportion is greater than or equal to the threshold.
The specific process of determining, according to the area proportion, whether the focal length meets the shooting requirement is: judge whether the proportion is less than a preset threshold set according to specific needs, for example 0.05, 0.08, or 0.10. When the proportion is less than the threshold, the focal length does not meet the requirement; when it is greater than or equal to the threshold, it does. Further, when the focal length is determined not to meet the requirement, prompt information is output telling the user the picture was taken from too far away and must be re-photographed; when it meets the requirement, prompt information is output telling the user the captured picture meets the requirement, and the picture to be detected is stored.
This embodiment determines, through a preset image detection model, the target region of the target image in the acquired picture to be detected, calculates the proportion of the picture it occupies, and determines from that proportion whether the focal length meets the shooting requirement, automatically screening out pictures whose focal length does not meet the requirement and reducing the difficulty of such screening.
Further, another embodiment of the present invention is proposed on the basis of the preferred embodiment of the image focal-length detection apparatus 100. In this embodiment, the first determining module 10 is further configured to acquire the captured picture to be detected and load it into the region generation network of the image detection model to determine candidate regions of the target image in the picture, the region generation network being a convolutional neural network; and to load the candidate regions into the target detection network of the image detection model to determine the target region in which the target image is located within them.
When the captured picture to be detected is acquired, it is loaded into the region generation network, which determines the candidate regions in the picture; these are then loaded into the target detection network to determine the target region in which the target image is located. In this embodiment, there are one or more candidate regions, each of which may contain the target image. Further, the candidate regions are rectangles, and the region generation network is a convolutional neural network. To speed up determination of the target region, the region generation network and the target detection network share a feature extraction layer.
The specific process of determining the target region is: a small convolutional network with input dimension n×n is placed on the feature map output by the last convolutional layer of the region generation network. The dimension n×n of this small network is smaller than the dimension N×N of the last convolutional layer (n and N being positive integers); that is, the area covered by the last convolutional layer is mapped to a lower-dimensional feature mapping layer. The feature mapping layer is connected to two parallel fully connected layers, referred to in this embodiment as the cls layer and the reg layer. The cls layer determines the likelihood — i.e., the probability — that a candidate region contains the target image; the reg layer determines the target image's position within the candidate region, and hence its size and displacement. For example, with the small network's dimension set to 3×3, the region generation network is a convolutional layer with a 3×3 kernel and 256 outputs, followed by the two fully connected layers cls and reg. At each position, the 3×3 kernel generates candidate regions in 9 modes in total (3 scales × 3 aspect ratios), so that the sizes and displacements of target images in the candidate regions loaded into the target detection network are handled robustly.
Further, the first determining module 10 includes:
an acquiring unit, configured to acquire a reference region corresponding to the target image in the image detection model;
an optimizing unit, configured to calculate the error between the positions of the target image in the candidate region and in the reference region, and optimize the region generation network through a network optimization function according to the error.
After the candidate regions are determined, a reference region corresponding to the target image is acquired from the image detection model; the reference region is determined by the annotation information stored in the model. The top-left and bottom-right coordinates of the target image's position in the candidate region, and in the reference region, are determined. The range of the target image in the candidate region, determined from those coordinates, is recorded as the first range; the corresponding range in the reference region is recorded as the second range. The intersection and union of the two ranges are computed, and the intersection is divided by the union to obtain the error between the target image's positions in the candidate region and the reference region. This error is compared with a preset error: when the error is greater than or equal to the preset error, the candidate region contains the target image; when it is smaller, the candidate region does not. The preset error can be set as needed; in this embodiment it is set to 0.7.
After the error is obtained, the region generation network — specifically, the neurons in it — is optimized through the network optimization function according to the error. The network optimization function L is

L({p_i}, {t_i}) = (1/D_cls)·Σ_i L_cls(p_i, p_i*) + λ·(1/D_reg)·Σ_i p_i*·L_reg(t_i, t_i*)

where i is the index over the set of candidate regions; p_i is the probability that a target image exists in the i-th candidate region; p_i* indicates whether the candidate region contains the target image, taking the value 0 or 1 (1: contains the target image; 0: does not), as decided by the error; and t_i is the coordinate, predicted by the region generation network, of the target image in the candidate region — formally a 4-dimensional vector. D_cls is the number of target images input in the candidate regions; in this embodiment, D_cls = 256. D_reg is the number of new candidate regions obtained by applying the 3 scalings and 3 aspect-ratio transformations to the candidate regions; in this embodiment, D_reg = 256 × 9. λ is set to 10 to balance the importance of determining the candidate region and the target region. Understandably, in other embodiments, D_cls, D_reg, and λ may be set to other values as needed.
This embodiment loads the picture to be detected into the region generation network and target detection network of the image detection model to obtain the target region in which the target image is located in the picture, so that whether the focal length at which the picture was captured satisfies the shooting requirement can be judged automatically from the target region.
It should be noted that, in hardware implementation, the first determining module 10, calculating module 20, second determining module 30, and so on may be embedded in, or independent of, the image focal-length detection device in hardware form, or may be stored in software form in the memory of the device so that the processor can invoke the operations corresponding to each module. The processor may be a central processing unit (CPU), a microprocessor, a single-chip microcomputer, or the like.
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of the device in the hardware operating environment involved in an embodiment of the present invention.
The image focal-length detection device of this embodiment may be a PC, or a terminal device such as a smartphone, tablet computer, e-book reader, MP3 (Moving Picture Experts Group Audio Layer III) player, MP4 (Moving Picture Experts Group Audio Layer IV) player, or portable computer.
As shown in FIG. 4, the image focal-length detection device may include a processor 1001 (for example a CPU) and a memory 1005, which may communicate via a communication bus 1002. The memory 1005 may be a high-speed RAM or a non-volatile memory such as a disk memory, and may optionally be a storage device independent of the processor 1001.
Optionally, the device may further include a user interface, a network interface, a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, and the like. The user interface may include a display and an input unit such as a keyboard, as well as standard wired and wireless interfaces. The network interface may optionally include a standard wired interface or a wireless interface (such as a WI-FI interface). In this embodiment, the picture to be detected may be acquired through the camera.
Those skilled in the art will understand that the device structure shown in FIG. 4 does not limit the image focal-length detection device, which may include more or fewer components than illustrated, combine certain components, or arrange components differently.
As shown in FIG. 4, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, and an image focal-length detection program. The operating system is the program that manages and controls the device's hardware and software resources, supporting the running of the image focal-length detection program and other software and/or programs. The network communication module implements communication between the memory 1005 and other hardware and software in the device.
In the image focal-length detection device shown in FIG. 4, the processor 1001 may be configured to execute the image focal-length detection program stored in the memory 1005, implementing the following steps:
acquiring a captured picture to be detected, and determining, through a preset image detection model, the target region in which the target image in the picture to be detected is located;
calculating the area proportion occupied by the target region in the picture to be detected;
determining, according to the area proportion, whether the focal length at which the picture to be detected was captured meets a shooting requirement.
Further, the processor 1001 is also configured to execute the image focal-length detection program to implement the following steps:
acquiring the captured picture to be detected, and loading it into the region generation network of the image detection model to determine candidate regions of the target image in the picture, the region generation network being a convolutional neural network;
loading the candidate regions into the target detection network of the image detection model to determine the target region in which the target image is located within the candidate regions.
Further, the processor 1001 is also configured to execute the image focal-length detection program to implement the following steps:
acquiring a reference region corresponding to the target image in the image detection model;
calculating the error between the positions of the target image in the candidate region and in the reference region, and optimizing the region generation network through a network optimization function according to the error.
Further, the processor 1001 is also configured to execute the image focal-length detection program to implement the following steps:
judging whether the area proportion is less than a preset threshold;
if the area proportion is less than the preset threshold, determining that the focal length does not meet the shooting requirement;
if the area proportion is greater than or equal to the preset threshold, determining that the focal length meets the shooting requirement.
Further, the processor 1001 is also configured to execute the image focal-length detection program to implement the following steps:
acquiring preset data corresponding to the target images that the image detection model can detect;
adjusting the region generation network of the image detection model according to the preset data, obtaining the adjusted region generation network;
generating target-region training data through the adjusted region generation network;
optimizing the target detection network of the image detection model according to the target-region training data;
determining the feature extraction layer shared by the region generation network and the target detection network, and fixing the feature extraction layer.
The specific implementation of the image focal-length detection device of the present invention is substantially the same as the embodiments of the image focal-length detection method and apparatus above, and is not repeated here.
The present invention provides a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the following steps:
acquiring a captured picture to be detected, and determining, through a preset image detection model, the target region in which the target image in the picture to be detected is located;
calculating the area proportion occupied by the target region in the picture to be detected;
determining, according to the area proportion, whether the focal length at which the picture to be detected was captured meets a shooting requirement.
Further, the one or more programs are executable by the one or more processors to implement the following steps:
acquiring the captured picture to be detected, and loading it into the region generation network of the image detection model to determine candidate regions of the target image in the picture, the region generation network being a convolutional neural network;
loading the candidate regions into the target detection network of the image detection model to determine the target region in which the target image is located within the candidate regions.
Further, the one or more programs are executable by the one or more processors to implement the following steps:
acquiring a reference region corresponding to the target image in the image detection model;
calculating the error between the positions of the target image in the candidate region and in the reference region, and optimizing the region generation network through a network optimization function according to the error.
Further, the one or more programs are executable by the one or more processors to implement the following steps:
judging whether the area proportion is less than a preset threshold;
if the area proportion is less than the preset threshold, determining that the focal length does not meet the shooting requirement;
if the area proportion is greater than or equal to the preset threshold, determining that the focal length meets the shooting requirement.
Further, the one or more programs are executable by the one or more processors to implement the following steps:
acquiring preset data corresponding to the target images that the image detection model can detect;
adjusting the region generation network of the image detection model according to the preset data, obtaining the adjusted region generation network;
generating target-region training data through the adjusted region generation network;
optimizing the target detection network of the image detection model according to the target-region training data;
determining the feature extraction layer shared by the region generation network and the target detection network, and fixing the feature extraction layer.
The specific implementation of the computer-readable storage medium of the present invention is substantially the same as the embodiments of the image focal-length detection method and apparatus above, and is not repeated here.
It should be noted that, as used herein, the terms "comprise", "include" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or system that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or system that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments. From the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structural or equivalent process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included in the patent protection scope of the present invention.

Claims (20)

  1. An image focal length detection method, characterized in that the image focal length detection method comprises:
    acquiring a captured picture to be detected, and determining, through a preset image detection model, a target region where a target image is located in the picture to be detected;
    calculating the area ratio that the target region occupies in the picture to be detected;
    determining, according to the area ratio, whether the focal length at which the picture to be detected was captured meets a shooting requirement.
  2. The image focal length detection method according to claim 1, characterized in that the step of acquiring a captured picture to be detected and determining, through a preset image detection model, a target region where a target image is located in the picture to be detected comprises:
    acquiring the captured picture to be detected, and loading the picture to be detected into a region proposal network of the image detection model to determine a candidate region of the target image in the picture to be detected, wherein the region proposal network is a convolutional neural network;
    loading the candidate region into a target detection network of the image detection model to determine the target region where the target image in the candidate region is located.
  3. The image focal length detection method according to claim 2, characterized in that, before the step of loading the candidate region into the target detection network of the image detection model to determine the target region where the target image in the candidate region is located, the method further comprises:
    acquiring a reference region corresponding to the target image in the image detection model;
    calculating an error between the positions of the target image in the candidate region and in the reference region, and optimizing the region proposal network through a network optimization function according to the error.
  4. The image focal length detection method according to claim 1, characterized in that the step of determining, according to the area ratio, whether the focal length at which the picture to be detected was captured meets the shooting requirement comprises:
    judging whether the area ratio is smaller than a preset threshold;
    if the area ratio is smaller than the preset threshold, determining that the focal length at which the picture to be detected was captured does not meet the shooting requirement;
    if the area ratio is greater than or equal to the preset threshold, determining that the focal length at which the picture to be detected was captured meets the shooting requirement.
  5. The image focal length detection method according to claim 1, characterized in that, before the step of acquiring a captured picture to be detected and determining, through a preset image detection model, a target region where a target image is located in the picture to be detected, the method further comprises:
    acquiring preset data corresponding to the target images that the image detection model is capable of detecting;
    adjusting the region proposal network of the image detection model according to the preset data to obtain an adjusted region proposal network;
    generating target-region training data through the adjusted region proposal network;
    optimizing the target detection network of the image detection model according to the target-region training data;
    determining feature extraction layers shared by the region proposal network and the target detection network, and fixing the feature extraction layers.
  6. An image focal length detection apparatus, characterized in that the image focal length detection apparatus comprises:
    a first determining module, configured to acquire a captured picture to be detected and determine, through a preset image detection model, a target region where a target image is located in the picture to be detected;
    a calculating module, configured to calculate the area ratio that the target region occupies in the picture to be detected;
    a second determining module, configured to determine, according to the area ratio, whether the focal length at which the picture to be detected was captured meets a shooting requirement.
  7. The image focal length detection apparatus according to claim 6, characterized in that the first determining module is further configured to acquire the captured picture to be detected and load the picture to be detected into a region proposal network of the image detection model to determine a candidate region of the target image in the picture to be detected, wherein the region proposal network is a convolutional neural network; and to load the candidate region into a target detection network of the image detection model to determine the target region where the target image in the candidate region is located.
  8. The image focal length detection apparatus according to claim 7, characterized in that the first determining module comprises:
    an acquiring unit, configured to acquire a reference region corresponding to the target image in the image detection model;
    an optimizing unit, configured to calculate an error between the positions of the target image in the candidate region and in the reference region, and optimize the region proposal network through a network optimization function according to the error.
  9. The image focal length detection apparatus according to claim 6, characterized in that the second determining module comprises:
    a judging unit, configured to judge whether the area ratio is smaller than a preset threshold;
    a determining unit, configured to determine, if the area ratio is smaller than the preset threshold, that the focal length at which the picture to be detected was captured does not meet the shooting requirement; and to determine, if the area ratio is greater than or equal to the preset threshold, that the focal length at which the picture to be detected was captured meets the shooting requirement.
  10. The image focal length detection apparatus according to claim 6, characterized in that the image focal length detection apparatus further comprises:
    an acquiring module, configured to acquire preset data corresponding to the target images that the image detection model is capable of detecting;
    an adjusting module, configured to adjust the region proposal network of the image detection model according to the preset data to obtain an adjusted region proposal network;
    a generating module, configured to generate target-region training data through the adjusted region proposal network;
    an optimizing module, configured to optimize the target detection network of the image detection model according to the target-region training data;
    a third determining module, configured to determine feature extraction layers shared by the region proposal network and the target detection network and fix the feature extraction layers.
  11. An image focal length detection device, characterized in that the image focal length detection device comprises a processor and a memory;
    the processor is configured to execute an image focal length detection program stored in the memory to implement the following steps:
    acquiring a captured picture to be detected, and determining, through a preset image detection model, a target region where a target image is located in the picture to be detected;
    calculating the area ratio that the target region occupies in the picture to be detected;
    determining, according to the area ratio, whether the focal length at which the picture to be detected was captured meets a shooting requirement.
  12. The image focal length detection device according to claim 11, characterized in that the processor is further configured to execute the image focal length detection program to implement the following steps:
    acquiring the captured picture to be detected, and loading the picture to be detected into a region proposal network of the image detection model to determine a candidate region of the target image in the picture to be detected, wherein the region proposal network is a convolutional neural network;
    loading the candidate region into a target detection network of the image detection model to determine the target region where the target image in the candidate region is located.
  13. The image focal length detection device according to claim 12, characterized in that the processor is further configured to execute the image focal length detection program to implement the following steps:
    acquiring a reference region corresponding to the target image in the image detection model;
    calculating an error between the positions of the target image in the candidate region and in the reference region, and optimizing the region proposal network through a network optimization function according to the error.
  14. The image focal length detection device according to claim 11, characterized in that the processor is further configured to execute the image focal length detection program to implement the following steps:
    judging whether the area ratio is smaller than a preset threshold;
    if the area ratio is smaller than the preset threshold, determining that the focal length at which the picture to be detected was captured does not meet the shooting requirement;
    if the area ratio is greater than or equal to the preset threshold, determining that the focal length at which the picture to be detected was captured meets the shooting requirement.
  15. The image focal length detection device according to claim 11, characterized in that the processor is further configured to execute the image focal length detection program to implement the following steps:
    acquiring preset data corresponding to the target images that the image detection model is capable of detecting;
    adjusting the region proposal network of the image detection model according to the preset data to obtain an adjusted region proposal network;
    generating target-region training data through the adjusted region proposal network;
    optimizing the target detection network of the image detection model according to the target-region training data;
    determining feature extraction layers shared by the region proposal network and the target detection network, and fixing the feature extraction layers.
  16. A computer readable storage medium, characterized in that the computer readable storage medium stores one or more programs, the one or more programs being executable by one or more processors to implement the following steps:
    acquiring a captured picture to be detected, and determining, through a preset image detection model, a target region where a target image is located in the picture to be detected;
    calculating the area ratio that the target region occupies in the picture to be detected;
    determining, according to the area ratio, whether the focal length at which the picture to be detected was captured meets a shooting requirement.
  17. The computer readable storage medium according to claim 16, characterized in that the one or more programs are executable by the one or more processors to implement the following steps:
    acquiring the captured picture to be detected, and loading the picture to be detected into a region proposal network of the image detection model to determine a candidate region of the target image in the picture to be detected, wherein the region proposal network is a convolutional neural network;
    loading the candidate region into a target detection network of the image detection model to determine the target region where the target image in the candidate region is located.
  18. The computer readable storage medium according to claim 17, characterized in that the one or more programs are executable by the one or more processors to implement the following steps:
    acquiring a reference region corresponding to the target image in the image detection model;
    calculating an error between the positions of the target image in the candidate region and in the reference region, and optimizing the region proposal network through a network optimization function according to the error.
  19. The computer readable storage medium according to claim 16, characterized in that the one or more programs are executable by the one or more processors to implement the following steps:
    judging whether the area ratio is smaller than a preset threshold;
    if the area ratio is smaller than the preset threshold, determining that the focal length at which the picture to be detected was captured does not meet the shooting requirement;
    if the area ratio is greater than or equal to the preset threshold, determining that the focal length at which the picture to be detected was captured meets the shooting requirement.
  20. The computer readable storage medium according to claim 16, characterized in that the one or more programs are executable by the one or more processors to implement the following steps:
    acquiring preset data corresponding to the target images that the image detection model is capable of detecting;
    adjusting the region proposal network of the image detection model according to the preset data to obtain an adjusted region proposal network;
    generating target-region training data through the adjusted region proposal network;
    optimizing the target detection network of the image detection model according to the target-region training data;
    determining feature extraction layers shared by the region proposal network and the target detection network, and fixing the feature extraction layers.
PCT/CN2017/078002 2016-12-28 2017-03-24 Image focal length detection method, apparatus, device and computer readable storage medium WO2018120460A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611240404.3A CN106686308B (zh) 2016-12-28 2016-12-28 Image focal length detection method and apparatus
CN201611240404.3 2016-12-28

Publications (1)

Publication Number Publication Date
WO2018120460A1 true WO2018120460A1 (zh) 2018-07-05

Family

ID=58873212

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/078002 WO2018120460A1 (zh) 2016-12-28 2017-03-24 Image focal length detection method, apparatus and device, and computer readable storage medium

Country Status (3)

Country Link
CN (1) CN106686308B (zh)
TW (1) TWI658730B (zh)
WO (1) WO2018120460A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110456829A (zh) * 2019-08-07 2019-11-15 深圳市维海德技术股份有限公司 定位跟踪方法、装置及计算机可读存储介质
CN110770739A (zh) * 2018-10-31 2020-02-07 深圳市大疆创新科技有限公司 一种基于图像识别的控制方法、装置及控制设备
CN110989344A (zh) * 2019-11-27 2020-04-10 云南电网有限责任公司电力科学研究院 一种巡检机器人预置参数自动调整方法及系统
CN111626995A (zh) * 2020-05-19 2020-09-04 上海艾豚科技有限公司 一种针对工件的智能嵌件检测方法和装置
CN113096077A (zh) * 2021-03-25 2021-07-09 深圳力维智联技术有限公司 异常比例检测方法、装置、设备及计算机可读存储介质

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108848301A (zh) * 2018-05-23 2018-11-20 阿里巴巴集团控股有限公司 一种票据拍摄交互方法、装置、处理设备及客户端
CN110248096B (zh) 2019-06-28 2021-03-12 Oppo广东移动通信有限公司 对焦方法和装置、电子设备、计算机可读存储介质
CN110276767B (zh) 2019-06-28 2021-08-31 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质
CN110267041B (zh) 2019-06-28 2021-11-09 Oppo广东移动通信有限公司 图像编码方法、装置、电子设备和计算机可读存储介质
CN110660090B (zh) 2019-09-29 2022-10-25 Oppo广东移动通信有限公司 主体检测方法和装置、电子设备、计算机可读存储介质
CN110796041B (zh) 2019-10-16 2023-08-18 Oppo广东移动通信有限公司 主体识别方法和装置、电子设备、计算机可读存储介质
CN113242387B (zh) * 2021-06-11 2022-05-03 广州立景创新科技有限公司 相机模块、对焦调整系统及对焦方法
TWI774418B (zh) * 2021-06-11 2022-08-11 大陸商廣州立景創新科技有限公司 相機模組、對焦調整系統及對焦方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020947A (zh) * 2011-09-23 2013-04-03 阿里巴巴集团控股有限公司 一种图像的质量分析方法及装置
CN105096350A (zh) * 2014-05-21 2015-11-25 腾讯科技(深圳)有限公司 图像检测方法及装置
CN105915791A (zh) * 2016-05-03 2016-08-31 广东欧珀移动通信有限公司 电子装置控制方法及装置、电子装置
CN106156749A (zh) * 2016-07-25 2016-11-23 福建星网锐捷安防科技有限公司 基于选择性搜索的人脸检测方法及装置

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100064533A * 2008-12-05 2010-06-15 삼성전자주식회사 Apparatus and method for automatic character-size adjustment using a camera
JP2015522959A * 2012-04-26 2015-08-06 The Trustees of Columbia University in the City of New York Systems, methods, and media for providing interactive refocusing in images
CN103810696B * 2012-11-15 2017-03-22 浙江大华技术股份有限公司 Target object image detection method and apparatus
CN103024165B * 2012-12-04 2015-01-28 华为终端有限公司 Method and apparatus for automatically setting a shooting mode
US9672416B2 * 2014-04-29 2017-06-06 Microsoft Technology Licensing, Llc Facial expression tracking
CN105654451A * 2014-11-10 2016-06-08 中兴通讯股份有限公司 Image processing method and apparatus
CN105894458A * 2015-12-08 2016-08-24 乐视移动智能信息技术(北京)有限公司 Method and apparatus for processing images containing faces
CN106250931A * 2016-08-03 2016-12-21 武汉大学 High-resolution image scene classification method based on random convolutional neural networks



Also Published As

Publication number Publication date
TWI658730B (zh) 2019-05-01
CN106686308B (zh) 2018-02-16
TW201841491A (zh) 2018-11-16
CN106686308A (zh) 2017-05-17


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17886938

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.09.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17886938

Country of ref document: EP

Kind code of ref document: A1