CN113676624A - Image acquisition method, device, equipment and storage medium - Google Patents

Image acquisition method, device, equipment and storage medium

Info

Publication number
CN113676624A
Authority
CN
China
Prior art keywords
image
original image
abnormal type
determining
acquisition device
Prior art date
Legal status
Pending
Application number
CN202110735463.2A
Other languages
Chinese (zh)
Inventor
聂泳忠
王博
Current Assignee
Xiren Ma Diyan Beijing Technology Co ltd
Original Assignee
Xiren Ma Diyan Beijing Technology Co ltd
Priority date: 2021-06-30
Filing date: 2021-06-30
Publication date: 2021-11-19
Application filed by Xiren Ma Diyan Beijing Technology Co ltd
Priority to CN202110735463.2A
Publication of CN113676624A
Legal status: Pending (current)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50: Constructional details
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01P: MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P15/00: Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration
    • G01P15/02: Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration by making use of inertia forces using solid seismic masses
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the invention discloses an image acquisition method, an image acquisition device, image acquisition equipment and a storage medium. The method comprises the steps of obtaining an original image collected by an image collecting device and the acceleration of an inertial sensor in a preset direction, wherein the image collecting device and the inertial sensor are positioned on the same electronic equipment; determining the abnormal type of the original image according to the image information of the original image or the acceleration of the inertial sensor in the preset direction; and executing the operation corresponding to the abnormal type to obtain the target image, thereby improving the quality of the target image.

Description

Image acquisition method, device, equipment and storage medium
Technical Field
The present invention relates to the field of image acquisition technologies, and in particular, to an image acquisition method, an image acquisition apparatus, an image acquisition device, and a storage medium.
Background
The visual positioning technology is a technology for positioning based on currently acquired images, and is widely applied to the fields of unmanned driving, robots and the like at present.
Reliable visual localization relies on high quality images, but in reality some external environments degrade the quality of the image, for example, rapid motion tends to blur the image, illumination changes tend to overexpose or underexpose the image, etc. It is difficult to extract sufficient features from these poor quality images, resulting in less accurate, or even failure, visual positioning.
Disclosure of Invention
The embodiment of the invention provides an image acquisition method, an image acquisition device, image acquisition equipment and a storage medium, which can improve the quality of images.
In a first aspect, an embodiment of the present invention provides an image obtaining method, including:
acquiring an original image acquired by an image acquisition device and the acceleration of an inertial sensor in a preset direction, wherein the image acquisition device and the inertial sensor are positioned on the same electronic equipment;
determining the abnormal type of the original image according to the image information of the original image or the acceleration of the inertial sensor in the preset direction;
and executing the operation corresponding to the abnormal type to obtain the target image.
In a second aspect, an embodiment of the present invention provides an image capturing apparatus, including:
the first acquisition module is used for acquiring an original image acquired by the image acquisition device and the acceleration of the inertial sensor in a preset direction, and the image acquisition device and the inertial sensor are positioned on the same electronic equipment;
the abnormal type determining module is used for determining the abnormal type of the original image according to the image information of the original image or the acceleration of the inertial sensor in the preset direction;
and the second acquisition module is used for executing the operation corresponding to the abnormal type to obtain the target image.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
the image acquisition device is used for acquiring images;
an inertial sensor for measuring acceleration;
a processor;
a memory for storing computer program instructions;
the computer program instructions, when executed by a processor, implement the method as described in the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method according to the first aspect.
According to the image acquisition method, the image acquisition device, the image acquisition equipment and the storage medium, the abnormal type of the original image is determined according to the acquired image information of the original image or the acceleration of the inertial sensor in the preset direction, the operation corresponding to the abnormal type is executed, the target image is obtained, and the quality of the target image is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments of the present invention are briefly described below; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of an image acquisition method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another image acquisition method provided by the embodiment of the invention;
FIG. 3 is a flow chart of another image acquisition method provided by the embodiment of the invention;
FIG. 4 is a block diagram of an image capturing apparatus according to an embodiment of the present invention;
fig. 5 is a structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below, and in order to make objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the present invention.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
At present, the images used for visual positioning are mainly acquired under normal conditions, and visual positioning can be performed with simple image processing. A normal condition here usually means that the device acquiring the images is moving at low speed, the illumination is changing slowly, and so on.
When the device acquiring the images moves rapidly, when the illumination changes rapidly, or in abnormal weather such as rain or fog, the quality of the acquired images is usually low, which affects the accuracy of visual positioning and may even cause positioning to fail.
Therefore, the embodiment of the invention provides an image acquisition method, which can improve image quality under abnormal conditions and thereby improve the accuracy of visual positioning under those conditions. The executing body of the image acquisition method provided by the embodiment of the present invention may be an image acquisition apparatus, or an acquisition module in the image acquisition apparatus for executing the image acquisition method. The embodiment of the present invention takes an image acquisition apparatus executing the image acquisition method as an example and describes the image acquisition method provided by the embodiment of the present invention in detail.
Fig. 1 is a flowchart of an image obtaining method according to an embodiment of the present invention.
As shown in fig. 1, the image acquisition method may include S110-S130 as shown below.
And S110, acquiring an original image acquired by the image acquisition device and the acceleration of the inertial sensor in a preset direction.
The image capturing device may be a device or apparatus with image capturing function, such as a camera.
The raw image may be an unprocessed image acquired by the image acquisition device.
The inertial sensor may be a sensor for measuring acceleration, and the preset direction may be a direction set in advance, for example, the direction in which the inertial sensor is to measure acceleration may be preset.
The image acquisition device and the inertial sensor in the embodiment of the invention are positioned on the same electronic device, and the electronic device can be a device with an automatic driving function, such as a mobile robot, an automatic driving vehicle, an unmanned aerial vehicle and the like.
In practical applications, the inertial sensor may be built into the image acquisition device or may exist independently of the image acquisition device; in either case, since both are on the same electronic equipment, the acceleration measured by the inertial sensor also reflects the acceleration of the image acquisition device or the electronic equipment.
In one embodiment, the image acquisition device may acquire the original image acquired by the image acquisition device in real time and acquire the acceleration of the inertial sensor in the preset direction in real time; it is also possible to acquire a pre-stored raw image acquired by the image acquisition device, and the acceleration of the inertial sensor in a preset direction.
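As an illustration only, the following sketch shows one way S110 could look in Python with an OpenCV camera; the read_acceleration() helper is hypothetical and stands in for whatever IMU driver the electronic equipment actually provides.

```python
import cv2

cap = cv2.VideoCapture(0)  # the image acquisition device (index is illustrative)

def read_acceleration():
    """Hypothetical stand-in for the IMU driver: returns the inertial sensor's
    acceleration (m/s^2) in the preset direction."""
    return 0.0

def acquire():
    ok, raw_image = cap.read()          # original image
    if not ok:
        raise RuntimeError("image acquisition failed")
    acceleration = read_acceleration()  # acceleration in the preset direction
    return raw_image, acceleration
```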
And S120, determining the abnormal type of the original image according to the image information of the original image or the acceleration of the inertial sensor in the preset direction.
The image information of the original image may be information reflecting the characteristics of the original image, and may include, for example, the intensity value of each pixel.
The anomaly type may be a type of anomaly that affects the quality of the original image, for example, underexposure, overexposure, or blurring of the original image.
In one embodiment, whether the original image is under-exposed or over-exposed can be determined according to the intensity value of each pixel point in the original image; or determining whether the original image is blurred according to the acceleration of the inertial sensor in the preset direction.
And S130, executing operation corresponding to the abnormal type to obtain a target image.
The operation corresponding to the abnormality type may be an operation capable of improving image quality to reduce the influence of the abnormality type on the image quality. The operation may be a software level operation or a hardware level operation.
The operation at the software level may be, for example, image processing of the original image; the operation at the hardware level may be, for example, adjusting the parameters of the image acquisition device and re-acquiring the image with the adjusted image acquisition device.
The target image is an image obtained after an operation corresponding to the abnormality type is performed. The quality of the target image is higher than that of the original image.
In one embodiment, the target image may be utilized for visual localization, whereby the accuracy of visual localization may be improved. Of course, the target image may be applied to other aspects, for example, the target image may be applied to lane line detection, obstacle detection, and the like, so as to improve the accuracy of lane line detection and obstacle detection.
Therefore, the abnormal type of the original image is determined according to the acquired image information of the original image or the acceleration of the inertial sensor in the preset direction, and the operation corresponding to the abnormal type is executed to obtain the target image, so that the quality of the target image is improved.
Taking the application of the target image to the field of visual positioning as an example, in one embodiment, after S130, the method may further include:
and carrying out visual positioning according to the target image to obtain a positioning result.
The visual positioning can be applied to mobile robots, unmanned driving, virtual reality or augmented reality and other scenes. And the target image is utilized to carry out visual positioning, so that the precision of the visual positioning can be improved.
In one embodiment, visual Localization may be achieved using Simultaneous Localization and Mapping (SLAM) techniques. Of course, other modes may also be adopted, and the mode of visual positioning is not particularly limited in the embodiment of the present invention.
Taking the application in the field of visual positioning as an example, in an embodiment, taking the determination of the abnormal type of the original image according to the acceleration of the inertial sensor in the preset direction as an example, as shown in fig. 2, the image acquisition method provided in the embodiment of the present invention may further include S210-S260 as shown below.
And S210, acquiring the acceleration of the inertial sensor in a preset direction.
And S220, determining the change rate of the acceleration in the set time.
When the carrier of the inertial sensor shakes violently, the original image collected by the image acquisition device is prone to blurring, which in turn affects the accuracy of visual positioning.
Whether the carrier of the inertial sensor, for example the image acquisition device or the electronic equipment in the embodiment of the invention, vibrates violently can be detected according to the change rate of the acceleration within the set time.
Specifically, the change rate of the acceleration within the set time can be determined from the acceleration of the inertial sensor within that time, and the length of the set time can be set according to actual needs.
And S230, when the change rate is greater than or equal to a first set threshold, determining that the abnormal type of the original image is a first abnormal type, wherein the first abnormal type is used for indicating that the definition of the original image is less than a second set threshold.
When the change rate is greater than or equal to the first set threshold, it can be determined that the image acquisition device is vibrating severely. In particular, the change rate may rise to the first set threshold or above within a short time, that is, the change rate increases abruptly.
When the image acquisition device shakes violently, the definition of the acquired image is poor. In the embodiment of the invention, the type of original-image anomaly caused by severe vibration of the image acquisition device is recorded as the first anomaly type, which indicates that the definition of the original image is smaller than the second set threshold, that is, the original image is blurred. The first set threshold and the second set threshold can be set according to actual needs.
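A minimal sketch of S220-S230, assuming the acceleration in the preset direction is sampled at known timestamps within the set time; the threshold value below is illustrative, not taken from the patent.

```python
import numpy as np

FIRST_SET_THRESHOLD = 15.0  # illustrative value, in m/s^3

def is_first_anomaly(accelerations, timestamps):
    """Return True when the acceleration change rate within the set time reaches
    the first set threshold, i.e. the original image is treated as blurred."""
    rates = np.abs(np.diff(accelerations) / np.diff(timestamps))
    return rates.max() >= FIRST_SET_THRESHOLD
```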
S240, determining the first exposure time of the image acquisition device according to the change rate of the acceleration in the set time.
It can be understood that when the image capturing device is severely vibrated, the longer the exposure time of the image capturing device, the more blurred the captured original image. Therefore, the exposure time of the image acquisition device can be reduced, thereby reducing the blurring degree of the image.
In an embodiment, when the anomaly type of the original image is the first anomaly type, the first mapping relation table may be searched according to the change rate of the acceleration, and the exposure time corresponding to the change rate, that is, the first exposure time, may be obtained.
The first mapping relation table is used for storing the association relation between the acceleration change rate and the exposure time. The association relationship may be a functional relationship or a one-to-one numerical relationship, that is, one change rate corresponds to one exposure time. The correlation of the acceleration change rate with the exposure time may be determined in advance based on a large number of experiments. The exposure time of the image acquisition device can be rapidly determined through the first mapping relation table, and then a high-quality image can be rapidly acquired.
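The first mapping relation table could be as small as a few calibrated breakpoints. The numbers in the sketch below are purely hypothetical placeholders for values that would come from experiments.

```python
import bisect

# Hypothetical first mapping relation table: acceleration change rate (m/s^3)
# versus exposure time (ms). A real table would be built from experiments.
RATE_BREAKPOINTS = [15.0, 30.0, 60.0, 120.0]
EXPOSURE_TIME_MS = [8.0, 4.0, 2.0, 1.0]

def first_exposure_time(change_rate):
    """Look up the first exposure time: the faster the acceleration changes,
    the shorter the exposure."""
    i = min(bisect.bisect_left(RATE_BREAKPOINTS, change_rate),
            len(EXPOSURE_TIME_MS) - 1)
    return EXPOSURE_TIME_MS[i]
```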
And S250, adjusting the image acquisition device according to the first exposure time, so that the image acquisition device acquires an image according to the first exposure time to obtain a target image.
Specifically, the exposure time of the image pickup device may be reduced to the first exposure time, causing the image pickup device to pick up an image based on the first exposure time, whereby the degree of blurring of the image may be reduced.
And S260, carrying out visual positioning according to the target image to obtain a positioning result.
Therefore, when the image acquisition device is determined to generate severe vibration based on the acceleration change rate of the inertial sensor, the exposure time of the image acquisition device can be reduced, the image blurring degree is reduced on hardware, and the precision of visual positioning is improved.
In one embodiment, when the image acquisition device shakes violently, the original image can be subjected to image processing from a software level, so that the blurring degree of the original image is reduced. For example, a blind deconvolution method may be used to deblur the original image, and the specific process is not limited in the embodiment of the present invention.
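The patent leaves the deblurring method open and names blind deconvolution as one option. The sketch below is a simplified stand-in that applies non-blind Richardson-Lucy deconvolution from scikit-image with an assumed linear motion kernel; a true blind method would also estimate the kernel from the image.

```python
import numpy as np
from skimage.restoration import richardson_lucy

def deblur(gray, kernel_len=9, iterations=30):
    """Simplified motion deblurring with an assumed horizontal motion PSF."""
    psf = np.zeros((kernel_len, kernel_len))
    psf[kernel_len // 2, :] = 1.0 / kernel_len  # linear motion kernel (assumption)
    img = gray.astype(np.float64) / 255.0
    restored = richardson_lucy(img, psf, iterations)
    return (np.clip(restored, 0.0, 1.0) * 255).astype(np.uint8)
```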
In an embodiment, taking the example of determining the abnormal type of the original image according to the image information of the original image, as shown in fig. 3, the image obtaining method provided by the embodiment of the present invention may further include S310-S3130 as shown below.
And S310, acquiring an original image acquired by the image acquisition device.
S320, generating an image histogram according to the original image, wherein the image histogram comprises the intensity value of each pixel point in the original image.
In scenes such as entering or exiting a tunnel, overexposure or underexposure of the image often occurs, affecting visual localization. For this reason, the illumination of the original image can be detected.
In one embodiment, the illumination of the original image can be determined by means of an image histogram. An image histogram presents the distribution of image intensities, with dark values on the left and bright values on the right. The image histogram in the embodiment of the invention comprises the intensity values of all pixel points of the original image.
For example, when the intensity values of the pixel points are mainly distributed on the left side, it can be determined that the illumination corresponding to the original image is insufficient; when the intensity values are mainly distributed on the right side, it can be determined that the illumination corresponding to the original image is too strong; when the intensity values are reasonably distributed between the left side and the right side, it can be determined that the illumination corresponding to the original image is normal.
S330, determining the distribution area of each pixel point according to the intensity value of each pixel point.
The intensity value can be used as the abscissa and the number of pixel points as the ordinate, and the number of pixel points at each intensity value in the original image can be counted, so that the distribution area of the pixel points is determined. The leftmost intensity value is the smallest and corresponds to the darkest light; the intensity values become larger toward the right and the corresponding light becomes stronger.
The regions may be divided according to the intensity values, and for example, a region having an intensity value smaller than a third set threshold value may be referred to as a first region, a region having an intensity value larger than a fifth set threshold value may be referred to as a second region, and a region having an intensity value between the third set threshold value and the fifth set threshold value may be referred to as a third region. The third setting threshold and the fifth setting threshold can be set according to actual needs.
When the pixel points of the original image are mainly distributed in the first region, the pixel points of the original image are considered to be distributed in the first region; similarly, when the pixel points are mainly distributed in the second region they are considered to be distributed in the second region, and when they are mainly distributed in the third region they are considered to be distributed in the third region.
And S340, judging whether the pixel points are distributed in the first area, if so, executing S350, and otherwise, executing S360.
Here, the first region is a region in which the intensity value is smaller than the third set threshold value.
Fig. 3 is an example of first determining whether each pixel of the original image is distributed in the first region, and certainly, may also first determine whether each pixel of the original image is distributed in the second region or the third region.
And S350, determining the abnormal type of the original image as a second abnormal type, wherein the second abnormal type is used for indicating that the exposure of the original image is smaller than a fourth set threshold.
When it is determined that the pixel points of the original image are distributed in the first region, it may be determined that the exposure level of the original image is less than a fourth set threshold, that is, the exposure is insufficient, and the corresponding exception type is the second exception type. The size of the fourth setting threshold can be set according to actual needs.
S360, judging whether the pixel points are distributed in the second area; if so, executing S370, otherwise executing S3130.
Here, the second region is a region in which the intensity value is larger than a fifth set threshold value.
When determining that each pixel point of the original image is not in the first region, it may be further determined whether the pixel point is distributed in the second region, or certainly, it may also be further determined whether the pixel point is distributed in the third region, where the former is taken as an example in fig. 3.
And S370, determining the abnormal type of the original image as a third abnormal type, wherein the third abnormal type is used for indicating that the exposure of the original image is larger than a sixth set threshold.
When it is determined that the pixel points of the original image are distributed in the second area, it may be determined that the exposure level of the original image is greater than the sixth set threshold, that is, the original image is overexposed, and the corresponding anomaly type is the third anomaly type. The size of the sixth set threshold can be set according to actual needs.
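Putting S320-S370 together, a minimal sketch of the histogram-based decision could look as follows; the region thresholds and the ratio that counts as "mainly distributed" are illustrative assumptions, since the patent leaves them to be set according to actual needs.

```python
import numpy as np

THIRD_SET_THRESHOLD = 64    # upper bound of the first (dark) region; illustrative
FIFTH_SET_THRESHOLD = 192   # lower bound of the second (bright) region; illustrative
DOMINANT_RATIO = 0.6        # fraction that counts as "mainly distributed"; illustrative

def exposure_anomaly_type(gray):
    """Classify an 8-bit grayscale original image from its intensity histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    dark = hist[:THIRD_SET_THRESHOLD].sum() / total         # first region
    bright = hist[FIFTH_SET_THRESHOLD + 1:].sum() / total   # second region
    if dark >= DOMINANT_RATIO:
        return "second anomaly type"   # underexposed
    if bright >= DOMINANT_RATIO:
        return "third anomaly type"    # overexposed
    return "normal"                    # third region dominates
```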
And S380, determining a first gain and a second exposure time of the image acquisition device according to the pre-trained first exposure processing model.
And when the abnormal type of the original image is determined to be the second abnormal type, indicating that the original image is underexposed. In one embodiment, the gain and exposure time of the image capture device may be adjusted.
In one embodiment, the original image may be input into a first exposure processing model trained in advance, and the first exposure processing model outputs a gain and an exposure time corresponding to the anomaly type, which are recorded as the first gain and the second exposure time in the embodiment of the present invention.
The structure of the first exposure processing model can be set according to actual needs, and for example, a deep convolutional neural network model can be adopted. Prior to application, a number of underexposed images may be selected for training the first exposure process model.
And S390, adjusting the image acquisition device according to the first gain and the second exposure time, so that the image acquisition device acquires an image according to the first gain and the second exposure time to obtain a target image.
After the first gain and the second exposure time are determined, the image acquisition device can be adjusted according to the first gain and the second exposure time, so that the image acquisition device acquires images according to the first gain and the second exposure time, and high-quality images can be acquired from hardware.
And S3100, determining a second gain and a third exposure time of the image acquisition device according to a pre-trained second exposure processing model.
When the anomaly type of the original image is determined to be a third anomaly type, i.e., the original image is overexposed, in one embodiment, the gain and exposure time of the image capture device may be adjusted.
In one embodiment, the original image may be input into a second exposure processing model trained in advance, and the second exposure processing model outputs a gain and an exposure time corresponding to the type of the anomaly, which are respectively recorded as a second gain and a third exposure time in the embodiment of the present invention.
The structure of the second exposure processing model can be set according to actual needs, and for example, a deep convolutional neural network model can be adopted. The second exposure process model may be trained with a number of overexposed images prior to application.
The second exposure processing model and the first exposure processing model may have the same or different structures, and for simplification of the operation, models having the same structure may be used, that is, an underexposed or overexposed image may be processed using the same model.
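The patent does not fix the network architecture. Purely as an illustration, a small convolutional regressor that outputs a gain and an exposure time could be sketched in PyTorch as below, with the same structure instantiated twice for the underexposure and overexposure cases.

```python
import torch.nn as nn

class ExposureNet(nn.Module):
    """Illustrative exposure processing model: regresses [gain, exposure_time]
    from a single-channel (grayscale) original image."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)

    def forward(self, x):
        return self.head(self.backbone(x))

first_exposure_model = ExposureNet()   # would be trained on underexposed images
second_exposure_model = ExposureNet()  # would be trained on overexposed images
```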
S3110, adjusting the image acquisition device according to the second gain and the third exposure time, and enabling the image acquisition device to acquire an image according to the second gain and the third exposure time to obtain a target image.
After the second gain and the third exposure time are determined, the image acquisition device can be adjusted according to the second gain and the third exposure time, so that the image acquisition device acquires images according to the second gain and the third exposure time, and high-quality images can be acquired from hardware.
And S3120, carrying out visual positioning according to the target image to obtain a positioning result.
S3130, performing visual positioning according to the original image to obtain a positioning result.
When the pixel points of the original image are not in the first area or the second area, the pixel points can be considered to be distributed in the third area, namely, the illumination is normal, and therefore, the visual positioning can be directly carried out according to the original image.
Therefore, when the original image is in abnormal illumination, such as underexposure or overexposure, the gain and the exposure time of the image acquisition device under different abnormal types can be determined, so that the image quality is improved in hardware, and the accuracy of visual positioning is improved.
In one embodiment, when the illumination of the original image is abnormal, the quality of the original image can be improved by adjusting the contrast of the image.
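One common software-level way to adjust contrast is contrast-limited adaptive histogram equalization (CLAHE), sketched below with illustrative parameters; the patent does not prescribe a specific contrast-adjustment method.

```python
import cv2

def adjust_contrast(gray):
    """Contrast-limited adaptive histogram equalization on an 8-bit grayscale image."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)
```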
It should be understood that abnormal weather may also affect the quality of the image, such as rainy, foggy, snowy, or hail days, which may add much noise to the image. Based on this, in one embodiment, determining the anomaly type of the original image according to the image information of the original image may further include:
determining the weather features in the original image according to a pre-trained weather feature determination model;
and when the weather feature is the preset weather feature, determining that the abnormal type of the original image is a fourth abnormal type, wherein the fourth abnormal type is used for indicating that the visibility of the weather corresponding to the preset weather feature is smaller than the preset visibility.
The weather feature determination model may be a model for determining weather features, for example, a deep learning model may be employed. Before application, the marked image containing weather such as rainy days, foggy days, haze days, sunny days or hail days can be used for training the weather characteristic determination model until the obtained weather characteristic determination model has a good convergence effect.
The preset weather feature may be a weather feature corresponding to abnormal weather, and the abnormal weather may be weather having visibility smaller than the preset visibility, such as rainy days, foggy days, haze days, hail days, or the like. The preset weather feature of the embodiment of the invention may be one type, that is, the weather feature corresponding to the original image is one type.
When the weather feature in the original image is the preset weather feature, it may be determined that the weather corresponding to the original image is abnormal, and the corresponding abnormal type is a fourth abnormal type.
In one embodiment, when the anomaly type of the original image is determined to be the fourth anomaly type, the original image may be processed to reduce the influence of the anomalous weather on the original image and improve the image quality.
For example, when the preset weather feature is a weather feature corresponding to a foggy day, defogging processing may be performed on the original image by using a dark channel prior defogging algorithm, or by using the convolutional neural network DehazeNet. For another example, when the preset weather feature is a weather feature corresponding to a rainy day, rain removal processing may be performed on the original image by using sparse-coding dictionary learning combined with a classifier.
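For reference, a compact version of the dark channel prior defogging mentioned above could be sketched as follows; the patch size, omega and transmission floor are the usual illustrative defaults rather than values from the patent, and the transmission refinement step (e.g. guided filtering) is omitted.

```python
import cv2
import numpy as np

def dehaze_dark_channel(img_bgr, patch=15, omega=0.95, t_min=0.1):
    """Dark channel prior defogging (simplified, no transmission refinement)."""
    img = img_bgr.astype(np.float64) / 255.0
    kernel = np.ones((patch, patch), np.uint8)
    # Dark channel: per-pixel channel minimum followed by a local minimum filter.
    dark = cv2.erode(img.min(axis=2), kernel)
    # Atmospheric light: average colour of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate, clipped to avoid over-amplification, then recovery.
    t = 1.0 - omega * cv2.erode((img / A).min(axis=2), kernel)
    t = np.clip(t, t_min, 1.0)[..., None]
    return (np.clip((img - A) / t + A, 0.0, 1.0) * 255).astype(np.uint8)
```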
Therefore, the embodiment of the invention can detect the original image under the abnormal condition, determine the abnormal type and execute corresponding operation aiming at different abnormal types, thereby improving the image quality under the abnormal condition and the accuracy of visual positioning.
Based on the same inventive concept, the embodiment of the invention also provides an image acquisition device, which can be arranged in electronic equipment with an automatic driving function. The image capturing apparatus according to an embodiment of the present invention will be described in detail with reference to fig. 4.
Fig. 4 is a structural diagram of an image capturing apparatus according to an embodiment of the present invention.
As shown in fig. 4, the image pickup device may include:
the first obtaining module 41 is configured to obtain an original image collected by an image collecting device and an acceleration of an inertial sensor in a preset direction, where the image collecting device and the inertial sensor are located on the same electronic device;
an anomaly type determining module 42, configured to determine an anomaly type of the original image according to image information of the original image or an acceleration of the inertial sensor in a preset direction;
and a second obtaining module 43, configured to perform an operation corresponding to the abnormal type to obtain the target image.
The modules of the above image acquisition apparatus are described in detail as follows:
in one embodiment, the anomaly type determining module 42 is specifically configured to:
determining the change rate of the acceleration within a set time;
and when the change rate is greater than or equal to a first set threshold value, determining the abnormal type of the original image as a first abnormal type, wherein the first abnormal type is used for indicating that the definition of the original image is less than a second set threshold value.
In one embodiment, the anomaly type determining module 42 is specifically configured to:
generating an image histogram according to the original image, wherein the image histogram comprises the intensity value of each pixel point in the original image;
determining the distribution area of each pixel point according to the intensity value of each pixel point;
when the pixel points are distributed in the first area, determining that the abnormal type of the original image is a second abnormal type, wherein the first area is an area of which the intensity value is smaller than a third set threshold value, and the second abnormal type is used for indicating that the exposure degree of the original image is smaller than a fourth set threshold value;
and when the pixel points are distributed in the second area, determining that the abnormal type of the original image is a third abnormal type, wherein the second area is an area with the intensity value larger than a fifth set threshold, and the third abnormal type is used for indicating that the exposure degree of the original image is larger than a sixth set threshold.
In one embodiment, the anomaly type determining module 42 is specifically configured to:
determining the weather features in the original image according to a pre-trained weather feature determination model;
and when the weather feature is the preset weather feature, determining that the abnormal type of the original image is a fourth abnormal type, wherein the fourth abnormal type is used for indicating that the visibility of the weather corresponding to the preset weather feature is smaller than the preset visibility.
In an embodiment, the second obtaining module 43 is specifically configured to:
when the abnormal type is a first abnormal type, determining first exposure time of the image acquisition device according to the change rate of the acceleration within the set time;
and adjusting the image acquisition device according to the first exposure time, so that the image acquisition device acquires an image according to the first exposure time to obtain a target image.
In an embodiment, the second obtaining module 43 is specifically configured to:
when the abnormal type is a second abnormal type, determining a first gain and a second exposure time of the image acquisition device according to a pre-trained first exposure processing model; adjusting the image acquisition device according to the first gain and the second exposure time, so that the image acquisition device acquires an image according to the first gain and the second exposure time to obtain a target image;
when the abnormal type is a third abnormal type, determining a second gain and a third exposure time of the image acquisition device according to a pre-trained second exposure processing model; and adjusting the image acquisition device according to the second gain and the third exposure time, so that the image acquisition device acquires the image according to the second gain and the third exposure time to obtain the target image.
In one embodiment, the apparatus may further comprise:
and the positioning module is used for performing visual positioning according to the target image after the operation corresponding to the abnormal type is executed to obtain the target image, so as to obtain a positioning result.
Each module in the apparatus shown in fig. 4 has a function of implementing each step in fig. 1-3 and can achieve a corresponding technical effect, and for brevity, is not described again here.
Based on the same inventive concept, an embodiment of the present invention further provides an electronic device, which has an automatic driving function, and the electronic device provided in the embodiment of the present invention is described in detail below with reference to fig. 5.
As shown in fig. 5, the electronic device may comprise an image acquisition arrangement 51 for acquiring images, an inertial sensor 52 for measuring acceleration, a processor 53 and a memory 54 for storing computer program instructions.
The processor 53 may include a Central Processing Unit (CPU), or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present invention.
Memory 54 may include mass storage for data or instructions. By way of example, and not limitation, memory 54 may include a Hard Disk Drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. In one example, memory 54 may include removable or non-removable (or fixed) media, or memory 54 may be non-volatile solid-state memory. In one example, the memory 54 may be a Read Only Memory (ROM). In one example, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 53 reads and executes the computer program instructions stored in the memory 54 to implement the method in the embodiment shown in fig. 1 to 3, and achieve the corresponding technical effect achieved by the embodiment shown in fig. 1 to 3 executing the method, which is not described herein again for brevity.
In one example, the electronic device may also include a communication interface 55 and a bus 56. As shown in fig. 5, the image capturing device 51, the inertial sensor 52, the processor 53, the memory 54, and the communication interface 55 are connected via a bus 56 to complete mutual communication.
The communication interface 55 is mainly used for implementing communication between modules, apparatuses and/or devices in the embodiment of the present invention.
The bus 56 includes hardware, software, or both to couple the various components of the electronic device to one another. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), another suitable bus, or a combination of two or more of these. Bus 56 may include one or more buses, where appropriate. Although specific buses have been described and shown in the embodiments of the invention, any suitable buses or interconnects are contemplated by the invention.
The electronic device may execute the image acquisition method in the embodiment of the present invention based on the currently acquired original image and the acceleration of the inertial sensor in the preset direction, thereby implementing the image acquisition method described in conjunction with fig. 1 to 3 and the image acquisition apparatus described in fig. 4.
In addition, in combination with the image acquisition method in the above embodiments, the embodiments of the present invention may be implemented by providing a computer storage medium. The computer storage medium has computer program instructions stored thereon; the computer program instructions, when executed by a processor, implement any of the image acquisition methods of the above embodiments.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic Circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this patent describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
Aspects of embodiments of the present invention are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware for performing the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As described above, only the specific embodiments of the present invention are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present invention, and these modifications or substitutions should be covered within the scope of the present invention.

Claims (10)

1. An image acquisition method, comprising:
acquiring an original image acquired by an image acquisition device and the acceleration of an inertial sensor in a preset direction, wherein the image acquisition device and the inertial sensor are positioned on the same electronic equipment;
determining the abnormal type of the original image according to the image information of the original image or the acceleration of the inertial sensor in a preset direction;
and executing the operation corresponding to the abnormal type to obtain a target image.
2. The method of claim 1, wherein determining the anomaly type of the raw image based on the acceleration of the inertial sensor in a preset direction comprises:
determining the change rate of the acceleration within a set time;
when the change rate is larger than or equal to a first set threshold, determining that the abnormal type of the original image is a first abnormal type, wherein the first abnormal type is used for indicating that the definition of the original image is smaller than a second set threshold.
3. The method of claim 1, wherein determining the anomaly type of the original image from the image information of the original image comprises:
generating an image histogram according to the original image, wherein the image histogram comprises the intensity value of each pixel point in the original image;
determining a distribution area of each pixel point according to the intensity value of each pixel point;
when the pixel points are distributed in a first area, determining that the abnormal type of the original image is a second abnormal type, wherein the first area is an area of which the intensity value is smaller than a third set threshold value, and the second abnormal type is used for indicating that the exposure level of the original image is smaller than a fourth set threshold value;
when the pixel points are distributed in a second area, determining that the abnormal type of the original image is a third abnormal type, wherein the second area is an area with an intensity value larger than a fifth set threshold, and the third abnormal type is used for indicating that the exposure level of the original image is larger than a sixth set threshold.
4. The method of claim 1, wherein determining the anomaly type of the original image from the image information of the original image comprises:
determining the weather features in the original image according to a pre-trained weather feature determination model;
when the weather feature is a preset weather feature, determining that the abnormal type of the original image is a fourth abnormal type, wherein the fourth abnormal type is used for indicating that the visibility of weather corresponding to the preset weather feature is smaller than a preset visibility.
5. The method of claim 2, wherein said performing the operation corresponding to the anomaly type to obtain the target image comprises:
when the abnormal type is a first abnormal type, determining first exposure time of the image acquisition device according to the change rate of the acceleration within set time;
and adjusting the image acquisition device according to the first exposure time, so that the image acquisition device acquires an image according to the first exposure time to obtain a target image.
6. The method of claim 3, wherein said performing the operation corresponding to the anomaly type to obtain the target image comprises:
when the abnormal type is a second abnormal type, determining a first gain and a second exposure time of the image acquisition device according to a pre-trained first exposure processing model; adjusting the image acquisition device according to the first gain and the second exposure time, so that the image acquisition device acquires an image according to the first gain and the second exposure time to obtain a target image;
when the abnormal type is a third abnormal type, determining a second gain and a third exposure time of the image acquisition device according to a pre-trained second exposure processing model; and adjusting the image acquisition device according to the second gain and the third exposure time, so that the image acquisition device acquires an image according to the second gain and the third exposure time to obtain a target image.
7. The method according to any one of claims 1 to 6, wherein after performing the operation corresponding to the anomaly type to obtain the target image, further comprising:
and carrying out visual positioning according to the target image to obtain a positioning result.
8. An image acquisition apparatus, characterized by comprising:
the first acquisition module is used for acquiring an original image acquired by an image acquisition device and the acceleration of an inertial sensor in a preset direction, and the image acquisition device and the inertial sensor are positioned on the same electronic equipment;
the abnormal type determining module is used for determining the abnormal type of the original image according to the image information of the original image or the acceleration of the inertial sensor in the preset direction;
and the second acquisition module is used for executing the operation corresponding to the abnormal type to obtain the target image.
9. An electronic device, comprising:
the image acquisition device is used for acquiring images;
an inertial sensor for measuring acceleration;
a processor;
a memory for storing computer program instructions;
the computer program instructions, when executed by the processor, implement the method of any of claims 1-7.
10. A computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-7.
CN202110735463.2A (priority date 2021-06-30, filing date 2021-06-30): Image acquisition method, device, equipment and storage medium. Status: Pending. Publication: CN113676624A.

Priority Applications (1)

Application number: CN202110735463.2A; priority date: 2021-06-30; filing date: 2021-06-30; title: Image acquisition method, device, equipment and storage medium

Applications Claiming Priority (1)

Application number: CN202110735463.2A; priority date: 2021-06-30; filing date: 2021-06-30; title: Image acquisition method, device, equipment and storage medium

Publications (1)

Publication number: CN113676624A; publication date: 2021-11-19

Family

ID=78538395

Family Applications (1)

Application number: CN202110735463.2A; title: Image acquisition method, device, equipment and storage medium; priority date: 2021-06-30; filing date: 2021-06-30

Country Status (1)

Country: CN; publication: CN113676624A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1464479A (en) * 2002-06-20 2003-12-31 成都威斯达芯片有限责任公司 Programmable self-adapting image quality non-linear enhancement processing process
CN1972457A (en) * 2005-11-24 2007-05-30 株式会社日立制作所 Video processing apparatus and portable mobile terminal
CN101064783A (en) * 2006-04-30 2007-10-31 华为技术有限公司 Method for obtaining automatic exposure control parameter and control method and image forming apparatus
CN101882305A (en) * 2010-06-30 2010-11-10 中山大学 Method for enhancing image
CN101897192A (en) * 2007-12-11 2010-11-24 奥林巴斯株式会社 White balance adjustment device and white balance adjustment method
CN102317857A (en) * 2009-02-12 2012-01-11 佳能株式会社 Image pickup device and control method thereof
CN105929850A (en) * 2016-05-18 2016-09-07 中国计量大学 Unmanned plane system and method with capabilities of continuous locking and target tracking
CN107194900A (en) * 2017-07-27 2017-09-22 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and mobile terminal
CN107209931A (en) * 2015-05-22 2017-09-26 华为技术有限公司 Color correction device and method
CN108259736A (en) * 2016-12-29 2018-07-06 昊翔电能运动科技(昆山)有限公司 Holder stability augmentation system and holder increase steady method
CN111294488A (en) * 2018-12-06 2020-06-16 佳能株式会社 Image pickup apparatus, control method thereof, and storage medium

Similar Documents

Publication Publication Date Title
CN108921161B (en) Model training method and device, electronic equipment and computer readable storage medium
US10819919B2 (en) Shutterless far infrared (FIR) camera for automotive safety and driving systems
CN110232359B (en) Retentate detection method, device, equipment and computer storage medium
EP3798975A1 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
CN112541396A (en) Lane line detection method, device, equipment and computer storage medium
CN110135302B (en) Method, device, equipment and storage medium for training lane line recognition model
CN113194359B (en) Method, device, equipment and medium for automatically grabbing baby wonderful video highlights
US10511793B2 (en) Techniques for correcting fixed pattern noise in shutterless FIR cameras
CN110599516A (en) Moving target detection method and device, storage medium and terminal equipment
CN111798414A (en) Method, device and equipment for determining definition of microscopic image and storage medium
CN113723216A (en) Lane line detection method and device, vehicle and storage medium
CN110378934B (en) Subject detection method, apparatus, electronic device, and computer-readable storage medium
CN117094975A (en) Method and device for detecting surface defects of steel and electronic equipment
JP5246254B2 (en) Determining the exposure control value for in-vehicle cameras
CN113780492A (en) Two-dimensional code binarization method, device and equipment and readable storage medium
CN113676624A (en) Image acquisition method, device, equipment and storage medium
CN113052019A (en) Target tracking method and device, intelligent equipment and computer storage medium
CN116543222A (en) Knee joint lesion detection method, device, equipment and computer readable storage medium
CN110189251B (en) Blurred image generation method and device
CN114037741B (en) Self-adaptive target detection method and device based on event camera
CN112949423B (en) Object recognition method, object recognition device and robot
Wischow et al. Monitoring and adapting the physical state of a camera for autonomous vehicles
CN116430069A (en) Machine vision fluid flow velocity measuring method, device, computer equipment and storage medium
CN113658052A (en) Image processing method, device, equipment and storage medium
CN111144312B (en) Image processing method, device, equipment and medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination