CN112601007A - Image acquisition method and device for characteristic region - Google Patents


Info

Publication number
CN112601007A
CN112601007A
Authority
CN
China
Prior art keywords
image
image acquisition
feature
characteristic
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011250798.7A
Other languages
Chinese (zh)
Other versions
CN112601007B (en)
Inventor
郭伟
朱麟
李�浩
刘威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202011250798.7A
Publication of CN112601007A
Application granted
Publication of CN112601007B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image acquisition method and device for a characteristic region. The method comprises: acquiring a global image of a specified object through a first image acquisition device, and performing first feature recognition on the global image to determine a feature region; determining the feature position on the specified object according to the position information of the feature region; and acquiring a region image of the feature position through a second image acquisition device, the region image being used for second feature recognition to obtain a feature recognition result corresponding to the specified object.

Description

Image acquisition method and device for characteristic region
Technical Field
The invention relates to the technical field of intelligent manufacturing, in particular to an image acquisition method and device for a characteristic region.
Background
In vision-based industrial inspection, image acquisition of the object to be inspected is an important step. However, acquiring high-definition images of the object region generally requires expensive shooting equipment such as high-definition cameras, high-speed cameras and 3D cameras, as well as relatively complicated illumination equipment to light the target region, which makes acquiring a high-definition image of the target region costly.
Disclosure of Invention
The embodiments of the invention provide an image acquisition method and device for a characteristic region, which enable high-definition images to be acquired at low cost.
An embodiment of the present invention provides an image acquisition method for a feature region, where the method includes: acquiring a global image of a specified object through a first image acquisition device, and performing first feature recognition on the global image to determine a feature region; determining the feature position on the specified object according to the position information of the feature region; and acquiring a region image of the feature position through a second image acquisition device, where the region image is used for second feature recognition to obtain a feature recognition result corresponding to the specified object.
In an embodiment, acquiring the region image of the feature position by the second image acquisition device includes: obtaining template features corresponding to the feature region, and determining a corresponding first image acquisition mode according to the template features; and controlling the second image acquisition device to capture images of the feature position according to the first image acquisition mode, and determining the region image.
In an embodiment, controlling the second image acquisition device to capture images of the feature position according to the first image acquisition mode and determining the region image includes: controlling the second image acquisition device to capture images of the feature position according to the first image acquisition mode to obtain a first captured image; performing integration analysis on the first captured image to determine a blurred region in the first captured image; adjusting parameters of the first image acquisition mode according to the blur characteristics of the blurred region to obtain a second image acquisition mode; and controlling the second image acquisition device to capture images of the blurred position corresponding to the blurred region according to the second image acquisition mode, and determining the region image.
In an embodiment, controlling the second image acquisition device to capture an image of the blurred position corresponding to the blurred region according to the second image acquisition mode and determining the region image includes: controlling the second image acquisition device to move to the blurred position corresponding to the blurred region according to the position information of the blurred region; controlling the second image acquisition device to capture an image of the blurred position according to the second image acquisition mode to obtain a second captured image; and integrating and stitching the first captured image and the second captured image to obtain the region image.
In an embodiment, the first and second image acquisition modes each include at least one of the following parameters: first parameter information representing the light source type, second parameter information representing the illumination angle, third parameter information representing the shooting angle, fourth parameter information representing the depth-of-field mode, and fifth parameter information representing the number of shots.
In an embodiment, acquiring the region image of the feature position by the second image acquisition device includes: obtaining a first current position corresponding to the second image acquisition device, and obtaining a second current position and a preset moving path corresponding to the feature position according to the position information of the feature region; performing path planning based on the first current position, the second current position and the preset moving path, controlling the second image acquisition device to move to the feature position according to the path planning result, and performing feature tracking on the feature position; and capturing an image of the feature position while the second image acquisition device is tracking the feature position.
In an embodiment, the second image acquisition device includes a first type device and a second type device, the first type device being disposed on a drone and the second type device on a preset track; correspondingly, controlling the second image acquisition device to move to the feature position according to the path planning result includes: determining whether the path planning result coincides with the preset track; when the path planning result does not coincide with the preset track, sending path planning information, which carries the feature position information, to the drone; and controlling, by the drone, the first type device to move to the feature position.
Another aspect of an embodiment of the present invention provides an image acquisition apparatus for a feature region, where the apparatus includes: a global acquisition module, configured to acquire a global image of a specified object through a first image acquisition device and perform first feature recognition on the global image to determine a feature region; a determining module, configured to determine the feature position on the specified object according to the position information of the feature region; and a region acquisition module, configured to acquire a region image of the feature position through a second image acquisition device, the region image being used for second feature recognition to obtain a feature recognition result corresponding to the specified object.
In an embodiment, the region acquisition module includes: an obtaining sub-module, configured to obtain template features corresponding to the feature region and determine a corresponding first image acquisition mode according to the template features; and an acquisition sub-module, configured to control the second image acquisition device to capture images of the feature position according to the first image acquisition mode to obtain the region image.
In an embodiment, the acquisition sub-module includes: an acquisition unit, configured to control the second image acquisition device to capture images of the feature position according to the first image acquisition mode to obtain a first captured image; an analysis unit, configured to perform integration analysis on the first captured image and determine a blurred region in the first captured image; and an adjusting unit, configured to adjust parameters of the first image acquisition mode according to the blur characteristics of the blurred region to obtain a second image acquisition mode; the acquisition unit being further configured to control the second image acquisition device to capture images of the blurred position corresponding to the blurred region according to the second image acquisition mode and determine the region image.
In an embodiment, the acquisition unit is configured to: control the second image acquisition device to move to the blurred position corresponding to the blurred region according to the position information of the blurred region; control the second image acquisition device to capture an image of the blurred position according to the second image acquisition mode to obtain a second captured image; and integrate and stitch the first captured image and the second captured image to obtain the region image. The first and second image acquisition modes each include at least one of the following parameters: first parameter information representing the light source type, second parameter information representing the illumination angle, third parameter information representing the shooting angle, fourth parameter information representing the depth-of-field mode, and fifth parameter information representing the number of shots.
In an embodiment, the region acquisition module includes: the obtaining sub-module, further configured to obtain a first current position corresponding to the second image acquisition device and obtain a second current position and a preset moving path corresponding to the feature position according to the position information of the feature region; and a tracking sub-module, configured to perform path planning based on the first current position, the second current position and the preset moving path, control the second image acquisition device to move to the feature position according to the path planning result, and perform feature tracking on the feature position; the acquisition sub-module being further configured to capture an image of the feature position while the second image acquisition device is tracking the feature position.
In an embodiment, the second image acquisition device includes a first type device and a second type device, the first type device being disposed on a drone and the second type device on a preset track; accordingly, the tracking sub-module includes: a judging unit, configured to determine whether the path planning result coincides with the preset track; a sending unit, configured to send path planning information, which carries the feature position information, to the drone when the path planning result does not coincide with the preset track; and a moving unit, configured to control, via the drone, the first type device to move to the feature position.
According to the image acquisition method and device for a feature region provided by the embodiments of the invention, a global image of the specified object is first acquired as required, the feature position that needs to be captured again is determined from the global image, and that feature position is then captured again, so that a region image of the feature position is obtained.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1 is a schematic flow chart illustrating an implementation of an image acquisition method for a feature area according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of an implementation of region acquisition by an image acquisition method for a feature region according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of an implementation of secondary acquisition of a region image by an image acquisition method of a feature region according to an embodiment of the present invention;
fig. 4 is a schematic flow chart illustrating an implementation of region acquisition by an image acquisition method for a feature region according to another embodiment of the present invention;
fig. 5 is a schematic diagram of an implementation module of an image capturing device of a feature area according to an embodiment of the present invention.
Detailed Description
To make the objects, features and advantages of the present invention clearer and easier to understand, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart illustrating an implementation of an image acquisition method for a feature region according to an embodiment of the present invention.
Referring to fig. 1, in one aspect, an embodiment of the present invention provides an image acquisition method for a feature region, where the method includes: operation 101, acquiring a global image of a specified object through a first image acquisition device, and performing first feature recognition on the global image to determine a feature region; operation 102, determining the feature position of the specified object according to the position information of the feature region; and operation 103, acquiring a region image of the feature position through a second image acquisition device, the region image being used for second feature recognition to obtain a feature recognition result corresponding to the specified object.
The image acquisition method provided by the embodiments of the invention is mainly applied in the technical field of intelligent manufacturing and is particularly suitable for smart factories. It can be applied to image acquisition of manufactured products, of the manufacturing environment, or of production equipment and production lines; in short, it suits any operation that requires taking pictures to obtain captured images.
The method first captures a global image of the specified object, then determines from the global image the feature position that needs to be captured again, and finally captures that feature position again, thereby obtaining a region image of the feature position.
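The three steps above form a coarse-to-fine acquisition loop, which can be sketched as follows. This is an illustrative Python skeleton, not the patented implementation; the camera objects, their `capture`/`move_to` methods, and the recognizer callback are hypothetical placeholders.

```python
# Illustrative sketch of the two-stage (global, then close-up) pipeline.
# Camera objects and the recognizer callback are hypothetical placeholders.

def acquire_feature_images(global_camera, detail_camera, find_feature_regions):
    """Coarse-to-fine acquisition: one global shot, then a close-up
    region image for each feature region found in the global image."""
    global_image = global_camera.capture()               # operation 101
    feature_regions = find_feature_regions(global_image)
    region_images = []
    for region in feature_regions:
        position = region["position"]                    # operation 102: region -> object position
        detail_camera.move_to(position)                  # operation 103: approach the feature
        region_images.append(detail_camera.capture())
    return global_image, region_images
```

A fixed second camera would simply skip the `move_to` step; the loop structure stays the same.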
In operation 101, the method acquires a global image of the specified object through the first image acquisition device. The first image acquisition device may be an ordinary camera, such as a 2D camera. Depending on production needs, the specified object may be a target product, a piece of production equipment, a production line, the factory environment during production, and so on; that is, the specified object may be mobile or fixed. Depending on the specific choice of specified object, the first image acquisition device may be mounted at a corresponding position in a fixed or moving posture. For example, when the specified object is a target product, the first image acquisition device may be fixed at the conveyor belt that conveys the target product; when the specified object is a production line, the device may be fixed to the top of the factory building at a position from which the overall structure of the line can be photographed; and when the specified object is a piece of production equipment, the device may be fixed to the side of that equipment. It is understood that a first image acquisition device in a moving posture can capture a panoramic view of the specified object; for example, when the specified object is production equipment, the device may make one circuit around the equipment to capture a global image representing its panorama. Other cases of the specified object follow by analogy and are not repeated below.
The global image is then subjected to first feature recognition. Depending on the purpose for which the global image was captured, the first feature recognition may identify blurred regions, defect regions, or pre-marked regions, or serve other purposes such as character recognition or image recognition; that is, "feature" here means a feature relevant to the shooting purpose. Methods such as template matching, tracking algorithms, or deep-learning detection can be used for the first feature recognition. Through it, a feature region can be determined from the global image: a region correlated with the shooting purpose, i.e. a region that must be imaged again to fulfil that purpose.
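As one concrete (hypothetical) instance of first feature recognition aimed at finding blurred regions, a common sharpness heuristic is the variance of a Laplacian response computed per image tile; the patent does not prescribe this metric, and the tile size and threshold below are illustrative.

```python
import numpy as np

def laplacian_variance(tile):
    """Sharpness score: variance of a 4-neighbour Laplacian response."""
    t = tile.astype(float)
    lap = (-4 * t[1:-1, 1:-1]
           + t[:-2, 1:-1] + t[2:, 1:-1]
           + t[1:-1, :-2] + t[1:-1, 2:])
    return lap.var()

def find_blurred_tiles(image, tile=32, threshold=50.0):
    """Return (row, col) tile indices whose sharpness falls below threshold.

    `image` is a 2D grayscale array; low Laplacian variance means the
    tile lacks edge detail and is treated as a candidate blurred region."""
    h, w = image.shape
    blurred = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            if laplacian_variance(image[r:r + tile, c:c + tile]) < threshold:
                blurred.append((r // tile, c // tile))
    return blurred
```

In practice the threshold would be calibrated against the "high-definition standard" the text mentions, e.g. from known-sharp reference captures.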
In operation 102, the position information of the feature region within the picture is mapped to the corresponding position on the specified object, namely the feature position. For example, when the specified object is a production line and the shooting purpose is to obtain a high-definition image of the entire line, a global image of the line is acquired through the first image acquisition device and analyzed to determine the blurred regions that do not meet the high-definition standard; each blurred region is then mapped onto the production line according to its position information in the global image, and the corresponding position on the line is determined. That position is the feature position. The mapping may also be implemented by template mapping or other position-determination means.
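Assuming the simplest case of a planar scene viewed head-on with a known calibration, the pixel-to-object mapping can be sketched as below; a real setup would more likely use a full homography or hand-eye calibration, and the scale and origin values are illustrative assumptions.

```python
def region_center(bbox):
    """Center pixel of a feature region given as (x, y, width, height)."""
    x, y, w, h = bbox
    return (x + w / 2, y + h / 2)

def pixel_to_object(px, py, scale_mm_per_px, origin_mm=(0.0, 0.0)):
    """Map a pixel coordinate in the global image to a physical position
    (in mm) on the imaged plane, given a prior planar calibration."""
    ox, oy = origin_mm
    return (ox + px * scale_mm_per_px, oy + py * scale_mm_per_px)
```

The feature position sent to the second camera would then be `pixel_to_object(*region_center(bbox), scale, origin)`.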
In operation 103, after the feature position is determined, a region image of the feature position is acquired through the second image acquisition device. The second image acquisition device is a camera for photographing the feature position, and it may be mounted in a fixed or movable posture. Because it is dedicated to the feature position, it shoots from a shorter distance than the first image acquisition device, so it can photograph the feature position at close range and obtain a region image whose sharpness meets the requirement. Specifically, there may be one or more second image acquisition devices. When the second image acquisition device is movable, it can plan a path according to the feature position so as to approach and track it. When the second image acquisition device is fixed and the specified object is fixed, several second image acquisition devices may be arranged around the specified object, and the one or more devices closest to the feature position may be selected to photograph it. When the second image acquisition device is fixed and the specified object is movable, the device can be arranged on a path the specified object must pass; when the specified object is determined to be passing the device, the device is controlled to capture images of the object's feature position.
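For the fixed-camera case, selecting the one or more second image acquisition devices closest to the feature position can be as simple as a distance sort; representing positions as 2D coordinates is an assumption for illustration.

```python
import math

def nearest_cameras(camera_positions, feature_position, k=1):
    """Pick the k fixed detail cameras closest to the feature position.

    camera_positions: list of (x, y) tuples; returns the k nearest,
    closest first, using Euclidean distance."""
    ranked = sorted(camera_positions,
                    key=lambda p: math.dist(p, feature_position))
    return ranked[:k]
```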
Similarly, for the second feature recognition of the region image, the specific type of recognition is determined by the shooting purpose of the region image: it may identify blurred regions, defect regions, or pre-marked regions, serve other purposes such as character recognition or image recognition, or be a joint feature recognition combining the global image and the region image. Again, "feature" here means a feature relevant to the shooting purpose of the region image. Methods such as template matching, tracking algorithms, or deep-learning detection can be used for the second feature recognition. Through it, a feature recognition result corresponding to the specified object is obtained from the region image. By processing and analyzing the region image, the result may be a high-definition image integrating the global image and the region image, the type of defect present in the region image, a quality evaluation of the workpiece in the region image, and the like.
Fig. 2 is a schematic flow chart illustrating an implementation of region acquisition by an image acquisition method for a feature region according to an embodiment of the present invention.
Referring to fig. 2, in the embodiment of the present invention, acquiring the region image of the feature position through the second image acquisition device in operation 103 includes: operation 1031, obtaining template features corresponding to the feature region and determining the corresponding first image acquisition mode according to the template features; and operation 1032, controlling the second image acquisition device to capture images of the feature position according to the first image acquisition mode and determining the region image.
Different feature positions impose different requirements on the second image acquisition device during capture, such as the required angle, depth of field, and illumination. Accordingly, different image acquisition modes corresponding to different feature positions can be preset, and the feature region determines which mode a given feature position uses, so that the way the second image acquisition device captures the feature position serves the shooting purpose.
Specifically, in operation 1031, the corresponding template feature is determined from the feature region; a template feature is a predetermined feature condition, so a first image acquisition mode can be associated with it. In application, a template image corresponding to the specified object can be determined from the object captured in the global image, and the template image can be segmented by criteria such as structure, function, and part position to form a number of template features; an image acquisition mode is then preset for each template feature. For example, when a template feature represents a protruding part, a shooting mode that circles the protrusion can be used; when it represents an inclined surface, a shooting mode covering the front, the sides, and other positions of the surface can be used; and when it represents an in-hole structure, a mode that first lights the inside of the hole and then shoots it can be used. Further cases are not detailed below. The first image acquisition mode is thus determined from the template features.
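The association between template features and preset capture modes can be realized as a lookup table. The feature names, mode fields, and values below are invented for illustration and are not taken from the patent.

```python
# Hypothetical lookup from template feature type to a preset capture mode,
# mirroring the examples in the text (protrusion -> orbit, hole -> light first).
CAPTURE_MODES = {
    "protrusion": {"motion": "orbit", "light": "ring", "shots": 8},
    "incline": {"motion": "front_and_side", "light": "diffuse", "shots": 3},
    "hole": {"motion": "fixed", "light": "coaxial", "shots": 1},
}

def select_capture_mode(template_feature, default=None):
    """Return the preset first image acquisition mode for a template feature,
    or `default` when no mode has been preset for it."""
    return CAPTURE_MODES.get(template_feature, default)
```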
In operation 1032, the second image acquisition device can be controlled to capture images of the feature position according to the first image acquisition mode to determine the region image. The region image may be one image or several, taken from a single angle or multiple angles, at a single level or multiple levels, and so on.
Fig. 3 is a schematic flow chart illustrating an implementation process of secondary acquisition of a region image by an image acquisition method for a feature region according to an embodiment of the present invention.
Referring to fig. 3, in an embodiment of the present invention, in operation 1032, the controlling the second image capturing device to perform image capturing on the feature position according to the first image capturing mode, and determining the area image includes: operation 10321, controlling a second image capturing device to capture an image of the feature location according to the first image capturing mode, and obtaining a first captured image; operation 10322, performing an integration analysis on the first captured image to determine a blurred region in the first captured image; operation 10323, performing parameter adjustment on the first image acquisition mode according to the blur characteristics of the blur area to obtain a second image acquisition mode; in operation 10324, a second image capturing device is controlled according to a second image capturing mode to capture an image of a blurred position corresponding to the blurred region, and a region image is determined.
In the process of acquiring the region image, the method can further evaluate the captured images to ensure that the sharpness of the final region image meets the requirement.
Specifically, in operation 10321, the method obtains a first captured image after controlling the second image acquisition device to capture the feature position according to the first image acquisition mode. As before, the first captured image may be one image or several, single-angle or multi-angle, single-level or multi-level, and so on.
In operation 10322, when there are multiple first captured images, they are analyzed to determine the blurred region and the sharp region in each one; the blurred region denotes a region of the first captured image whose sharpness does not meet a preset standard, and the sharp region denotes a region whose sharpness does. Then, by comparing the sharp and blurred regions across the images, a region judged blurred in all the first captured images is determined to be a blurred region that needs to be captured again.
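The rule that only regions judged blurred in every first captured image need re-capture amounts to a set intersection over per-image blur results, e.g.:

```python
def regions_blurred_in_all(blurred_per_image):
    """A region needs re-capture only if it is blurred in every first
    captured image. `blurred_per_image` is a list of collections of
    region indices (e.g. tile coordinates); returns their intersection."""
    if not blurred_per_image:
        return set()
    common = set(blurred_per_image[0])
    for indices in blurred_per_image[1:]:
        common &= set(indices)
    return common
```

A region blurred in one shot but sharp in another is already covered by the sharp shot, which is why the intersection, not the union, drives re-capture.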
In operation 10323, analogously to the way operation 102 determines the feature position from the feature region, the blur feature on the specified object is determined from the blurred region that needs to be captured again. The first image acquisition mode is then parameter-adjusted according to the blur feature to obtain the second image acquisition mode. Specifically, a second image acquisition mode differing from the first may be preset for each blur feature, so that the second mode is selected by the blur feature for capturing it. Alternatively, the second image acquisition mode may be obtained by determining the cause of the blur and adjusting the parameters of the first mode accordingly: for example, when the cause is that the light is too weak, the parameter representing light intensity in the first mode is increased; when the cause is occlusion, the parameter representing the shooting angle in the first mode is adjusted.
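The cause-driven adjustment can be sketched as a small rule table; the cause labels, parameter names, and adjustment amounts are hypothetical, mirroring the examples in the text (weak light raises intensity, occlusion changes the shooting angle).

```python
def adjust_capture_mode(mode, blur_cause):
    """Derive a second capture mode from the first by adjusting the
    parameter that the diagnosed blur cause points at. The cause labels
    and parameter names here are illustrative, not from the patent."""
    adjusted = dict(mode)  # leave the first mode untouched
    if blur_cause == "low_light":
        adjusted["light_intensity"] = mode.get("light_intensity", 1.0) * 2
    elif blur_cause == "occlusion":
        adjusted["shooting_angle"] = mode.get("shooting_angle", 0) + 30
    elif blur_cause == "out_of_focus":
        adjusted["depth_of_field_mode"] = "focus_bracketing"
    return adjusted
```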
In operation 10324, the second image acquisition device is controlled according to the second image acquisition mode to capture the blurred position, obtaining a second captured image. The first and second captured images are then analyzed together, and when no blurred region remains, they can be directly determined as the region image.
In this embodiment of the present invention, operation 10324 of controlling the second image acquisition device to capture the blurred position corresponding to the blurred region according to the second image acquisition mode and determining the region image includes: first, controlling the second image acquisition device to move to the blurred position corresponding to the blurred region according to the position information of the blurred region; then, controlling the second image acquisition device to capture the blurred position according to the second image acquisition mode to obtain a second captured image; and then, integrating and stitching the first captured image and the second captured image to obtain the region image.
In a specific implementation scenario, the second image acquisition device can be arranged on a mobile device, such as an unmanned vehicle, an unmanned aerial vehicle, a mechanical arm, a mobile track, a mobile support, or a robot.
The method determines the blurred position corresponding to the blurred region according to the position information of the blurred region, and then sends a movement instruction for that position to the mobile device. When the mobile device has its own data processing capability, it plans a movement route itself to carry the second image acquisition device to the blurred position. When it does not, the method first determines a movement path and then sends a movement instruction carrying that path, so that the mobile device moves to the blurred position along it. Once the mobile device reaches the blurred position, the second image acquisition device can capture the blurred position to obtain the second captured image. Likewise, when capturing the region image, the second image acquisition device can reach the feature position using the movement schemes described in this implementation scenario.
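The two dispatch cases above, depending on whether the mobile device can plan its own route, can be sketched as follows. The `MoveInstruction` structure and the `plan_path` callback are hypothetical names introduced for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class MoveInstruction:
    """Instruction sent to the mobile device (an assumed message shape)."""
    target: tuple                               # blurred position to reach
    path: list = field(default_factory=list)    # empty when device plans itself

def build_move_instruction(target, device_can_plan: bool, plan_path):
    """If the mobile device can plan routes, send only the target position;
    otherwise plan the path controller-side and embed it in the instruction."""
    if device_can_plan:
        return MoveInstruction(target)
    return MoveInstruction(target, plan_path(target))
```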
The first captured image and the second captured image are then integrated and stitched to obtain a region image in which the sharpness of every region meets the preset sharpness index. It should be added that, when the shooting purpose is to obtain a high-definition global image of the whole object, the global image and the region image can be stitched to obtain a high-definition target image of the whole.
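In the simplest reading, stitching here means pasting the re-captured patch back over the blurred region of the first captured image. The sketch below assumes axis-aligned, already-registered images; a real system would also register the images and blend the seam.

```python
import numpy as np

def stitch(first: np.ndarray, second_patch: np.ndarray, top_left: tuple) -> np.ndarray:
    """Replace the blurred region of the first captured image with the
    re-captured patch (assumes the patch is registered to the region)."""
    y, x = top_left
    out = first.copy()  # keep the original first captured image intact
    h, w = second_patch.shape[:2]
    out[y:y + h, x:x + w] = second_patch
    return out
```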
In an embodiment of the invention, the first image acquisition mode and the second image acquisition mode each comprise at least one of the following parameters: first parameter information characterizing the light source type, second parameter information characterizing the illumination angle, third parameter information characterizing the shooting angle, fourth parameter information characterizing the depth-of-field mode, and fifth parameter information characterizing the number of shots.
Specifically, the second image acquisition device of the method comprises a mobile device, a camera device, and a light source device. The camera device and the light source device are arranged on the mobile device, so that both can move relative to the designated object as the mobile device moves. The camera device may further be a rotatable camera, to further improve the shooting of the feature position and the blurred position. There may be one or more light source devices, of the same or different types; multiple light source devices can be combined to obtain the required light-related parameter information. By controlling these parameters, richer views of the target area can be acquired, providing more useful information for subsequent processing; for example, multi-angle, multi-depth-of-field, and multi-level images of the target area under different illumination conditions can be collected.
Several of these parameters are described below by way of illustration.
The first parameter information may be used to determine the light source type for illuminating the feature position or the blurred position, such as light sources of different colors, different illumination intensities, or different illumination ranges. Further, when the installation position is constrained, the light source device may be an intelligent adjustable light source whose lighting mode is adjusted to obtain the different light source types.
The second parameter information characterizes the illumination angle. The illumination angle can be adjusted through an angle-adjustable joint connecting the mobile device and the light source device, by directly using a light source with an adjustable angle, or through angle adjustment by the mobile device itself.
The third parameter information characterizes the shooting angle of the camera device. The shooting angle can be set by adjusting the angle of the second image acquisition device, through an angle-adjustable joint connecting the camera device and the mobile device, or through angle adjustment by the mobile device.
The fourth parameter information characterizes the depth-of-field mode of the camera device, and the fifth parameter information characterizes the number of shots taken by the camera device. These parameters can be controlled by carrying the corresponding parameter information in the control instruction. It should be understood that, depending on the shooting purpose, the method includes but is not limited to the parameter information above.
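The five parameters and the "carry only the set parameters in the control instruction" behaviour can be sketched as a small structure. The field names and the dictionary payload are illustrative assumptions, not a wire format defined by the method.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CaptureMode:
    """The five optional capture-mode parameters described above."""
    light_source_type: Optional[str] = None     # first parameter information
    illumination_angle: Optional[float] = None  # second
    shooting_angle: Optional[float] = None      # third
    depth_of_field_mode: Optional[str] = None   # fourth
    shot_count: Optional[int] = None            # fifth

    def to_command(self) -> dict:
        """Pack only the parameters that are set into a control-instruction payload."""
        return {k: v for k, v in self.__dict__.items() if v is not None}
```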
Fig. 4 is a schematic flow chart illustrating an implementation of region acquisition by an image acquisition method for a feature region according to another embodiment of the present invention.
Referring to fig. 4, in the embodiment of the present invention, operation 103 of acquiring a region image of the feature position by a second image acquisition device includes: operation 1033, acquiring a first current position corresponding to the second image acquisition device, and acquiring, according to the position information of the feature region, a second current position corresponding to the feature position and a preset movement path; operation 1034, planning a path based on the first current position, the second current position, and the preset movement path, controlling the second image acquisition device to move to the feature position according to the path planning result, and performing feature tracking on the feature position; operation 1035, capturing the feature position while the second image acquisition device is feature-tracking it.
It should be noted that operations 1031 to 1032, operations 1033 to 1035, and the other operations mentioned in the method are distinguished only for convenience of explanation; there is no mandatory order between them. That is, operations 1031 to 1032 form one possible implementation and operations 1033 to 1035 another, so the method may perform only operations 1031 to 1032, only operations 1033 to 1035, or a combination of both.
When the second image acquisition device of the method is connected to a mobile device, the method can determine the current location of the second image acquisition device by acquiring the first current position in operation 1033. The second current position gives the current location of the feature position, and the preset movement path gives the locations the feature position will pass through as it moves.
In operation 1034, a path can be planned according to the mobile device's own movement pattern, the first current position, the second current position, and the preset movement path, so that the mobile device can move to the vicinity of the feature position. Once there, a feature tracking algorithm keeps the movement of the mobile device consistent with that of the feature position, so the two remain relatively still.
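Keeping the device and the feature "relatively still" amounts to matching the feature's velocity. A minimal sketch of that matching step, assuming periodic position observations of the feature (the function names are illustrative):

```python
def feature_velocity(prev_pos, cur_pos, dt: float):
    """Estimate the feature's velocity from two observations taken dt apart."""
    return tuple((c - p) / dt for p, c in zip(prev_pos, cur_pos))

def predicted_position(cur_pos, velocity, dt: float):
    """Where the mobile device should be after dt to stay relatively still
    with the feature (constant-velocity assumption)."""
    return tuple(c + v * dt for c, v in zip(cur_pos, velocity))
```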
In operation 1035, with the device and feature relatively still, the second image acquisition device captures the feature position, and the resulting region image can achieve higher definition. In a production process, images of the designated object and the feature position can thus be acquired without stopping the production line, so production is not affected.
In the embodiment of the invention, the second image acquisition device comprises a first type device and a second type device; the first type device is arranged on an unmanned aerial vehicle, and the second type device is arranged on a preset track. Correspondingly, operation 1034 of controlling the second image acquisition device to move to the feature position according to the path planning result includes: first, judging whether the path planning result coincides with the preset track; then, when it does not coincide, sending path planning information carrying feature position information to the unmanned aerial vehicle; and then controlling, through the unmanned aerial vehicle, the first type device to move to the feature position.
In an implementation scenario, there can be multiple kinds of mobile device for the second image acquisition device to connect to, so that different feature positions can be shot from a suitable carrier. For example, when the space around the designated object is narrow, or the designated object sits high up, the second image acquisition device can be carried by an unmanned aerial vehicle to shoot the feature position. When the designated object is on a production track, the second image acquisition device can be carried along a preset track, which gives better synchronization between the device and the feature position. When the position of the designated object is complex, the second image acquisition device can be carried by a mechanical arm to shoot the feature position.
In a specific implementation scenario with two types of mobile device, the first type device is a camera device arranged on the unmanned aerial vehicle, and the second type device is a camera device arranged on the preset track.
In operation 1034, it may first be judged whether the path planning result coincides with the preset track. When it does, the camera device can be moved to the feature position along the preset track. When it does not, path planning information is sent to the unmanned aerial vehicle, so that the vehicle plans its own movement path to the feature position. Depending on the vehicle's planning capability, the path planning information can carry feature position information, including but not limited to at least one of: the current position of the feature position and the movement path of the feature position. The first type device is then controlled through the unmanned aerial vehicle to move to the feature position and shoot it.
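The coincidence test and the resulting carrier choice can be sketched as follows. Representing the track and the planned path as waypoint lists, and the tolerance value, are assumptions for illustration; the method itself does not specify how "coincides" is evaluated.

```python
def choose_carrier(planned_path, preset_track, tol: float = 1e-6) -> str:
    """Use the track-mounted camera when every planned waypoint lies on the
    preset track; otherwise dispatch the drone with the planning information."""
    on_track = all(
        any(abs(px - tx) <= tol and abs(py - ty) <= tol
            for tx, ty in preset_track)
        for px, py in planned_path
    )
    return "track" if on_track else "drone"
```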
An overall description is given below for an implementation scenario in which the designated object is a production product, such as an assembled mobile phone. A first camera device is arranged at the front end of the production line, and a second camera device is arranged on a mechanical arm; the shooting purpose is to detect whether all parts on the phone are installed. As the production line conveys the phone past the first camera device, that device captures the phone to obtain a global image, matches a part template against the global image to determine the areas not shot clearly, and determines shooting parameters for those areas accordingly. The mechanical arm then carries the second camera device to photograph those areas with the determined parameters, obtaining several first captured images. If analysis shows that blurred regions remain across all first captured images, the parameters are adjusted and the second camera device performs supplementary shooting of the blurred regions; at least the first and second captured images are determined as the region images, which are then analyzed together with the global image to determine whether all parts on the phone are installed.
Fig. 5 is a schematic diagram of an implementation module of an image capturing device of a feature area according to an embodiment of the present invention.
Referring to fig. 5, another aspect of the present invention provides an image capturing apparatus, including: the global acquisition module 501, configured to acquire a global image of a designated object through a first image acquisition device, perform first feature recognition on the global image, and determine a feature area according to the global image; a determining module 502, configured to determine a feature position of the designated object according to the position information of the feature area; and the area acquisition module 503, configured to acquire an area image of the feature position through a second image acquisition device, the area image being used for performing second feature recognition to obtain a feature recognition result corresponding to the designated object.
In this embodiment of the present invention, the area collecting module 503 includes: the obtaining sub-module 5031 is configured to obtain a template feature corresponding to the feature region, and determine a corresponding first image acquisition mode according to the template feature; the acquisition sub-module 5032 is configured to control the second image acquisition device to perform image acquisition on the feature position according to the first image acquisition mode to obtain an area image.
In this embodiment of the present invention, the acquisition sub-module 5032 includes: an acquiring unit 50321, configured to control a second image acquiring device to acquire an image of the feature position according to the first image acquiring mode, so as to obtain a first acquired image; an analyzing unit 50322, configured to perform integration analysis on the first captured image to determine a blurred region in the first captured image; an adjusting unit 50323, configured to perform parameter adjustment on the first image acquisition mode according to the blur feature of the blur area to obtain a second image acquisition mode; the acquiring unit 50321 is further configured to control the second image acquiring device to acquire an image of the blurred position corresponding to the blurred region according to the second image acquiring mode, and determine the region image.
In an embodiment of the present invention, the acquiring unit 50321 is configured to: control the second image acquisition device to move to the blurred position corresponding to the blurred region according to the position information of the blurred region; control the second image acquisition device to capture the blurred position according to the second image acquisition mode to obtain a second captured image; and integrate and stitch the first captured image and the second captured image to obtain the region image. The first image acquisition mode and the second image acquisition mode each comprise at least one of the following parameters: first parameter information characterizing the light source type, second parameter information characterizing the illumination angle, third parameter information characterizing the shooting angle, fourth parameter information characterizing the depth-of-field mode, and fifth parameter information characterizing the number of shots.
In this embodiment of the present invention, the area collecting module 503 includes: the obtaining sub-module 5031 is further configured to obtain a first current position corresponding to the second image capturing device, and obtain a second current position and a preset moving path corresponding to the feature position according to the position information of the feature area; the tracking sub-module 5033 is configured to perform path planning based on the first current position, the second current position, and a preset moving path, control the second image acquisition device to move to the feature position according to a path planning result, and perform feature tracking on the feature position; the acquisition sub-module 5032 is further configured to perform image acquisition on the feature position when the feature position is feature tracked by the second image acquisition device.
In the embodiment of the invention, the second image acquisition device comprises a first type device and a second type device, the first type device is arranged on the unmanned aerial vehicle, and the second type device is arranged on the preset track; accordingly, tracking sub-module 5033 comprises: a determining unit 50331, configured to determine whether the path planning result coincides with the preset track; a sending unit 50332, configured to send path planning information to the unmanned aerial vehicle when it is determined that the path planning result does not coincide with the preset track, where the path planning information carries characteristic location information; a moving unit 50333 for controlling the first type device to move to the feature location by the drone.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for image acquisition of a feature region, the method comprising:
acquiring a global image of a specified object through a first image acquisition device, and performing first feature recognition on the global image to determine a feature area according to the global image;
determining the characteristic position of the designated object according to the position information of the characteristic area;
and acquiring a region image of the feature position through a second image acquisition device, wherein the region image is used for carrying out second feature recognition so as to obtain a feature recognition result corresponding to the specified object.
2. The method of claim 1, wherein acquiring the region image of the feature location by a second image acquisition device comprises:
acquiring template features corresponding to the feature areas, and determining a corresponding first image acquisition mode according to the template features;
and controlling a second image acquisition device to acquire images of the characteristic positions according to the first image acquisition mode, and determining the area images.
3. The method of claim 2, wherein controlling a second image acquisition device to perform image acquisition on the feature position according to the first image acquisition mode, and determining the area image comprises:
controlling a second image acquisition device to acquire images of the characteristic positions according to the first image acquisition mode to obtain a first acquired image;
performing integration analysis on the first acquired image to determine a fuzzy area in the first acquired image;
adjusting parameters of the first image acquisition mode according to the fuzzy characteristics of the fuzzy area to obtain a second image acquisition mode;
and controlling the second image acquisition device to acquire images of fuzzy positions corresponding to the fuzzy areas according to the second image acquisition mode, and determining area images.
4. The method according to claim 3, wherein controlling the second image capturing device to perform image capturing on the blurred position corresponding to the blurred region according to the second image capturing mode, and determining the region image comprises:
controlling the second image acquisition device to move to a fuzzy position corresponding to the fuzzy area according to the position information of the fuzzy area;
controlling the second image acquisition device to acquire the image of the fuzzy position according to the second image acquisition mode to obtain a second acquired image;
and integrating and splicing the first collected image and the second collected image to obtain an area image.
5. The method according to claim 3 or 4, wherein the first and second image acquisition modes each comprise at least one of the following parameters: first parameter information characterizing the light source type, second parameter information characterizing the illumination angle, third parameter information characterizing the shooting angle, fourth parameter information characterizing the depth-of-field mode, and fifth parameter information characterizing the number of shots.
6. The method of claim 1, wherein said acquiring a region image of said feature location by a second image acquisition device comprises:
acquiring a first current position corresponding to the second image acquisition device, and acquiring a second current position and a preset moving path corresponding to the characteristic position according to the position information of the characteristic area;
performing path planning based on the first current position, the second current position and a preset moving path, controlling the second image acquisition device to move to a characteristic position according to a path planning result, and performing characteristic tracking on the characteristic position;
and under the condition that the second image acquisition device performs feature tracking on the feature position, acquiring an image of the feature position.
7. The method according to claim 6, characterized in that the second image acquisition device comprises a first type of device provided on the drone and a second type of device provided on a preset trajectory;
correspondingly, the controlling the second image acquisition device to move to the feature position according to the path planning result includes:
judging whether the path planning result coincides with the preset track;
when the path planning result is judged to be not coincident with the preset track, path planning information is sent to the unmanned aerial vehicle, and the path planning information carries characteristic position information;
controlling, by the drone, the first type device to move to a feature location.
8. An image acquisition device of a characteristic region, characterized in that the device comprises:
the system comprises a global acquisition module, a first image acquisition device and a second image acquisition device, wherein the global acquisition module is used for acquiring a global image of a specified object through the first image acquisition device, and performing first feature recognition on the global image so as to determine a feature region according to the global image;
the determining module is used for determining the characteristic position of the specified object according to the position information of the characteristic area;
and the area acquisition module is used for acquiring an area image of the characteristic position through a second image acquisition device, and the area image is used for carrying out second characteristic identification so as to obtain a characteristic identification result corresponding to the specified object.
9. The apparatus of claim 8, wherein the region acquisition module comprises:
the acquisition submodule is used for acquiring template features corresponding to the feature areas and determining a corresponding first image acquisition mode according to the template features;
and the acquisition submodule is used for controlling a second image acquisition device to acquire images of the characteristic positions according to the first image acquisition mode to obtain regional images.
10. The apparatus of claim 8, wherein the acquisition sub-module comprises:
the acquisition unit is used for controlling a second image acquisition device to acquire images of the characteristic positions according to the first image acquisition mode to obtain a first acquired image;
the analysis unit is used for performing integrated analysis on the first acquired image and determining a fuzzy area in the first acquired image;
the adjusting unit is used for adjusting parameters of the first image acquisition mode according to the fuzzy characteristics of the fuzzy area to obtain a second image acquisition mode;
the acquisition unit is further configured to control the second image acquisition device to perform image acquisition on a blurred position corresponding to the blurred region according to the second image acquisition mode, and determine a region image.
CN202011250798.7A 2020-11-11 2020-11-11 Image acquisition method and device for characteristic region Active CN112601007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011250798.7A CN112601007B (en) 2020-11-11 2020-11-11 Image acquisition method and device for characteristic region


Publications (2)

Publication Number Publication Date
CN112601007A true CN112601007A (en) 2021-04-02
CN112601007B CN112601007B (en) 2022-06-28

Family

ID=75183304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011250798.7A Active CN112601007B (en) 2020-11-11 2020-11-11 Image acquisition method and device for characteristic region

Country Status (1)

Country Link
CN (1) CN112601007B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE202023105500U1 (en) 2023-09-21 2023-10-17 Gesellschaft zur Förderung angewandter Informatik e. V. Device for detecting defects on a peripheral surface of a long metallic product passed through the device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107948505A (en) * 2017-11-14 2018-04-20 维沃移动通信有限公司 A kind of panorama shooting method and mobile terminal
CN108289169A (en) * 2018-01-09 2018-07-17 北京小米移动软件有限公司 Image pickup method, device, electronic equipment and storage medium
CN108347556A (en) * 2017-01-21 2018-07-31 盯盯拍(东莞)视觉设备有限公司 Panoramic picture image pickup method, panoramic image display method, panorama image shooting apparatus and panoramic image display device
CN108566513A (en) * 2018-03-28 2018-09-21 深圳臻迪信息技术有限公司 A kind of image pickup method of unmanned plane to moving target
CN109580645A (en) * 2018-12-20 2019-04-05 深圳灵图慧视科技有限公司 Defects identification equipment
US20190289207A1 (en) * 2018-03-16 2019-09-19 Arcsoft Corporation Limited Fast scan-type panoramic image synthesis method and device


Also Published As

Publication number Publication date
CN112601007B (en) 2022-06-28

Similar Documents

Publication Publication Date Title
CN110595999B (en) Image acquisition system
US11317681B2 (en) Automated identification of shoe parts
CN111914692B (en) Method and device for acquiring damage assessment image of vehicle
US11423566B2 (en) Variable measuring object dependent camera setup and calibration thereof
US20170277979A1 (en) Identifying defect on specular surfaces
CN110910459B (en) Camera device calibration method and device and calibration equipment
WO2007052859A1 (en) System and method for real-time calculating location
KR20210019014A (en) Method and plant for determining the location of a point on a complex surface of space
KR20130114899A (en) Image sensing method using dual camera and apparatus thereof
CN112307912A (en) Method and system for determining personnel track based on camera
CN108156359A (en) Intelligent industrial camera
CN112601007B (en) Image acquisition method and device for characteristic region
CN113504239B (en) Quality control data analysis method
CN111993420A (en) Fixed binocular vision 3D guide piece feeding system
JP2020036227A (en) Imaging timing control device, imaging system, and imaging method
CN111625001B (en) Robot control method and device and industrial robot
KR20230061612A (en) Object picking automation system using machine learning and method for controlling the same
Stroppa et al. Self-optimizing robot vision for online quality control
CN208623750U (en) Intelligent industrial camera
JP2020036226A (en) Imaging timing control device, imaging system, and imaging method
CN214846174U (en) Lighting device
CN117824624B (en) Indoor tracking and positioning method, system and storage medium based on face recognition
CN116027798B (en) Unmanned aerial vehicle power inspection system and method based on image correction
US20240028781A1 (en) Imaging condition adjusting device and imaging condition adjusting method
KR101754517B1 (en) System and Method for Auto Focusing of Curved Serface Subject

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant