CN114333010A - Image recognition method, image recognition device, storage medium and electronic equipment - Google Patents


Publication number
CN114333010A
CN114333010A (application CN202111628334.XA)
Authority
CN
China
Prior art keywords
image
area
target object
region
determining
Prior art date
Legal status
Pending
Application number
CN202111628334.XA
Other languages
Chinese (zh)
Inventor
王洪
乔国坤
Current Assignee
Xinjiang Aiwinn Information Technology Co Ltd
Original Assignee
Xinjiang Aiwinn Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xinjiang Aiwinn Information Technology Co Ltd filed Critical Xinjiang Aiwinn Information Technology Co Ltd
Priority to CN202111628334.XA
Publication of CN114333010A


Abstract

The application discloses an image recognition method, an image recognition apparatus, a storage medium, and an electronic device. The method comprises: acquiring a first image, captured in a first shooting mode, and determining a first region of it that contains a preset feature; acquiring a second image, captured in a second shooting mode with the same shooting angle, shooting time, and shooting scene as the first image, and determining a second region of it that contains the preset feature, the first region being larger than the second region; determining a third region in the second image that contains the second region and has the same size as the first region; replacing the first region with the third region so that the target object in the first region coincides with the target object in the third region; and recognizing the first image after the region replacement to obtain a recognition result. The method and apparatus can improve the accuracy of image recognition.

Description

Image recognition method, image recognition device, storage medium and electronic equipment
Technical Field
The present application relates to the field of image recognition, and in particular, to an image recognition method, an image recognition apparatus, a storage medium, and an electronic device.
Background
Image recognition methods are currently widely applied to face recognition and commodity recognition. However, objects that interfere with recognition often appear in an image during the recognition process, which reduces recognition accuracy.
Disclosure of Invention
The embodiments of the present application provide an image recognition method, an image recognition apparatus, a storage medium, and an electronic device, which can improve the accuracy of image recognition.
In a first aspect, an image recognition method is provided, including:
acquiring a first image, and determining a first region of the first image that contains a preset feature, the first image being captured in a first shooting mode;
acquiring a second image, and determining a second region of the second image that contains the preset feature, the second image being captured in a second shooting mode, the first and second images having the same shooting angle, shooting time, and shooting scene, and the first region being larger than the second region;
determining a third region in the second image that contains the second region, the third region having the same size as the first region;
replacing the first region with the third region so that the target object in the first region coincides with the target object in the third region;
and recognizing the first image after the region replacement to obtain a recognition result.
In one embodiment, the acquiring a first image and determining a first region containing a preset feature in the first image includes:
acquiring the first image;
acquiring the brightness value of a target object in the first image;
if the brightness value of the target object is larger than a preset brightness threshold value, determining that the preset feature exists in the first image;
determining a first region of the first image containing the preset feature.
In one embodiment, the acquiring the brightness value of the target object in the first image includes:
acquiring a target subject and a target object in the first image;
and if the region where the target subject is located at least partially overlaps the region where the target object is located, acquiring the brightness value of the target object.
In one embodiment, the determining a first region of the first image that includes the preset feature includes:
determining the position and the size of a target frame according to the position of the target object;
adjusting the size of the target frame to enable the target frame to cover the target object;
and taking the area covered by the target frame in the first image as the first area.
In one embodiment, the first photographing mode is a visible light photographing mode, and the second photographing mode is an infrared photographing mode.
In one embodiment, the determining, in the second image, a third region including the second region includes:
acquiring a preset mapping relationship between pixels in the first image and pixels in the second image;
determining a plurality of target pixels in the second image from the plurality of pixels in the first region through the preset mapping relationship;
and determining the third region according to the plurality of target pixels.
In one embodiment, the replacing the first area with the third area and overlapping the target object in the first area with the target object in the third area includes:
and replacing the pixels in the first area with target pixels in the third area according to the preset mapping relation.
In a second aspect, an image recognition apparatus is provided, including:
a first acquisition module, configured to acquire a first image and determine a first region of the first image that contains a preset feature, the first image being captured in a first shooting mode;
a second acquisition module, configured to acquire a second image and determine a second region of the second image that contains the preset feature, the second image being captured in a second shooting mode, the first and second images having the same shooting angle, shooting time, and shooting scene, and the first region being larger than the second region;
a determining module, configured to determine a third region including the second region in the second image, where the size of the third region is the same as the size of the first region;
a replacing module, configured to replace the first area with the third area, and overlap the target object in the first area with the target object in the third area;
and the identification module is used for identifying the first image after the area is replaced to obtain an identification result.
In a third aspect, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed on a computer, causes the computer to execute the image recognition method as described above.
In a fourth aspect, an electronic device is provided, comprising a memory and a processor, the processor being configured to execute the image recognition method as described above by invoking a computer program stored in the memory.
In this image recognition method, a first image is acquired and a first region of it containing a preset feature is determined; a second image is acquired and a second region of it containing the preset feature is determined, the first region being larger than the second; a third region containing the second region and having the same size as the first region is determined in the second image; the first region is then replaced with the third region. The preset-feature area in the first image after the replacement is therefore smaller than before, so performing image recognition on the replaced first image reduces the interference of the preset feature and improves recognition accuracy.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of an image recognition method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a first image acquired according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a second image acquired according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a first image after replacing an area according to an embodiment of the present application;
fig. 5 is a schematic diagram of an image recognition apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present application.
Fig. 1 is a schematic flowchart of an image recognition method according to an embodiment of the present application. Referring to fig. 1, the image recognition method may include steps 100 to 500.
Step 100, acquiring a first image, and determining a first area containing a preset feature in the first image, wherein the first image is an image shot by adopting a first shooting mode.
Optionally, a first image is obtained, and whether the first image contains preset features or not is judged; when the first image contains the preset features, determining a first area according to the preset features, wherein the first area contains the preset features.
The first image is an image to be recognized, and may be obtained by shooting a scene through a camera. As shown in fig. 2, in the face recognition scene, the first image is a face image obtained by shooting a face with a camera.
The preset feature is an image feature that degrades recognition accuracy. For example, in a face recognition scene, the preset feature is a reflective (glare) feature in the image.
The first region is a region of the first image that includes a preset feature. Optionally, the first region is a preset feature region in the first image, that is, the area of the first region is equal to the area of the preset feature region in the first image.
In one embodiment, acquiring the first image and determining the first region containing the predetermined feature in the first image includes steps 110 to 140.
Step 110, a first image is acquired. For example, a first image is obtained by shooting a human face through a camera.
And step 120, acquiring the brightness value of the target object in the first image.
The target object is an object in the image having a correlation with a preset feature. For example, when the predetermined characteristic is a reflective characteristic, the target object may be glasses or an ornament that is easily reflective. It will be appreciated that glasses and some metal ornamentation are more reflective than the human face.
Optionally, acquiring the brightness value of the target object in the first image comprises steps 121 and 122.
And step 121, acquiring a target object in the first image.
And step 122, acquiring the brightness value of the target object. For example, the luminance value of the target object is determined from the luminance values of the respective pixels in the target object.
Optionally, the target object in the first image is acquired by a target object detection model. The target object detection model is a convolutional neural network model, such as a Fast R-CNN-based or YOLO-based detector.
Optionally, the target object in the first image is located by keypoint detection, since some target objects have a fixed positional relationship with the object to be recognized. For example, in face recognition the target object is a pair of glasses and the object to be recognized is a face; the glasses have a fixed positional relationship with the eye region of the face. A plurality of eye keypoints is therefore obtained through a face-keypoint detection model, the eye region in the first image is derived from those keypoints, and the position of the glasses is then determined from the eye region.
Optionally, the first image is converted into a gray image, and then the brightness value of the target object is obtained according to the gray image. For example, the first image is converted into a gray image, gray-scale luminance values of pixels in the target object in the gray image are obtained, and an average value of the gray-scale luminance values of the pixels is calculated to obtain a luminance value of the target object.
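The grayscale-averaging step above can be sketched as follows. This is a minimal sketch: the RGB channel layout, the use of BT.601 luma weights for the gray conversion, and representing the target-object region as a boolean mask are illustrative assumptions not fixed by the text.

```python
import numpy as np

def region_brightness(image_rgb: np.ndarray, mask: np.ndarray) -> float:
    """Convert an RGB image to grayscale and average the gray values
    of the pixels inside the target-object mask."""
    # Standard ITU-R BT.601 luma weights for RGB -> gray conversion.
    gray = (0.299 * image_rgb[..., 0]
            + 0.587 * image_rgb[..., 1]
            + 0.114 * image_rgb[..., 2])
    return float(gray[mask].mean())

# Example: a bright 10x10 patch (simulated glare on the glasses)
# inside an otherwise dark 100x100 image.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[40:50, 40:50] = 250
glasses_mask = np.zeros((100, 100), dtype=bool)
glasses_mask[40:50, 40:50] = True

print(region_brightness(img, glasses_mask))  # 250.0 for this synthetic patch
```

The resulting value is what step 130 compares against the preset brightness threshold.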
Step 130, if the brightness value of the target object is greater than the preset brightness threshold, it is determined that the preset feature exists in the first image.
Optionally, a preset brightness threshold is obtained, and whether the brightness value of the target object is greater than the preset brightness threshold is determined. And when the brightness value of the target object is larger than the preset brightness threshold value, determining that the preset features exist in the first image.
It will be appreciated that, when a reflective feature is present, the corresponding target object has a higher brightness value than when it is absent, and the more pronounced the reflection, the higher the brightness value. The brightness value of the target object can therefore be compared with the preset brightness threshold: when it exceeds the threshold, a reflective feature exists in the image region corresponding to the target object.
Optionally, the preset brightness threshold is determined according to a specific scene. For example, in a face recognition scene, the preset brightness threshold is greater than or equal to 100 and less than or equal to 220, for example, the preset brightness threshold is 204.
Step 140, a first region containing a preset feature in the first image is determined.
Optionally, the first region is determined from the target object in the first image. For example, after the brightness value of the target object is found to exceed the preset brightness threshold, the region of the first image corresponding to the target object is taken as the first region. A brightness value above the threshold indicates that the target object is in a reflective state, i.e. its region is the preset-feature region, so taking that region as the first region makes the first region contain the preset feature.
Optionally, the first region is determined from luminance values of pixels in the first image. For example, obtaining a brightness value of each pixel in the first image, and determining whether the brightness value of each pixel is greater than a preset brightness threshold; and then, according to the judgment result, obtaining a plurality of pixels with the brightness values larger than a preset brightness threshold value to obtain a plurality of first pixels. The first region is determined based on a position of the first pixel in the first image. It is understood that the first pixel is a pixel corresponding to the light reflecting feature.
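The per-pixel variant above can be sketched as follows; the bounding-box representation of the first region and the concrete threshold value are illustrative assumptions.

```python
import numpy as np

def bright_region_bbox(gray: np.ndarray, threshold: float):
    """Collect pixels brighter than the threshold (the 'first pixels')
    and return their bounding box as (top, left, bottom, right),
    or None when no reflective feature is present."""
    ys, xs = np.nonzero(gray > threshold)
    if ys.size == 0:
        return None
    return (int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1)

gray = np.zeros((60, 80))
gray[10:20, 30:50] = 230            # simulated glare patch
print(bright_region_bbox(gray, 204))  # (10, 30, 20, 50)
```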
Step 200, obtaining a second image, and determining a second area containing preset features in the second image, wherein the second image is an image shot by adopting a second shooting mode, the shooting angle, the shooting time and the shooting scene of the first image and the second image are the same, and the first area is larger than the second area.
For example, the second image is acquired and it is determined whether it contains the preset feature. If it does, the second region is determined from the preset feature in the second image; if it does not, the second region is taken to be empty (area 0).
The shooting angle, shooting time, and shooting scene of the second image are the same as those of the first image, so that the image content of the two images is consistent. "The same" is understood within tolerances: the difference between the two shooting angles is smaller than a preset angle difference (e.g. 5°), and the difference between the two shooting times is smaller than a preset time difference (e.g. 5 s).
The first shooting mode and the second shooting mode are two different shooting modes. Compared with the first shooting mode, the second shooting mode is more beneficial to reducing the preset features in the image. In one embodiment, the first shooting mode is a visible light shooting mode, and the second shooting mode is an infrared shooting mode. The visible light shooting mode is a mode of shooting a target scene by using a visible light camera, and a visible light image (a color image) can be correspondingly obtained. The infrared shooting mode is a mode of shooting a target scene by adopting an infrared camera, and an infrared image can be correspondingly obtained. It can be understood that the infrared image has almost no reflection phenomenon compared to the color image.
As shown in fig. 3, fig. 3 is a schematic diagram of a second image acquired by a second photographing mode. The second image shown in fig. 3 is identical to the first image shown in fig. 2 in shooting angle, shooting time, and shooting scene, but the second image shown in fig. 3 is an infrared image, and the first image shown in fig. 2 is a visible light image.
The second region is a region corresponding to the preset feature in the second image, that is, the area of the second region is equal to the area of the preset feature region in the second image.
Optionally, the second region is determined based on the luminance values of the respective pixels in the second image. For example, the brightness value of each pixel in the second image is obtained, and whether the brightness value of each pixel is greater than a preset brightness threshold is determined. And acquiring a plurality of pixels with the brightness values larger than a preset brightness threshold value to obtain a plurality of second pixels. The second region is determined based on the position of the second pixel in the second image.
Optionally, the first region is a preset feature region in the first image, the second region is a preset feature region in the second image, and an area ratio of the first region to the first image is greater than an area ratio of the second region to the second image.
And step 300, determining a third area containing the second area in the second image, wherein the size of the third area is the same as that of the first area.
Optionally, the third region is determined from the second region so that it covers the second region and has the same size as the first region. Having the same size means having the same shape and dimensions: for example, the third region is square like the first region and contains the same number of pixels as the first region.
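A minimal sketch of this step: place a box of the first region's size so that it covers the smaller second region, clamped to the image bounds. Centering the box on the second region is an assumption; the text only requires that the third region contain the second region and match the first region's size.

```python
def third_region(second, first_h, first_w, img_h, img_w):
    """second = (top, left, bottom, right) box of the preset feature in the
    second image; returns a box of size first_h x first_w containing it."""
    top, left, bottom, right = second
    cy, cx = (top + bottom) // 2, (left + right) // 2
    # Center the frame on the second region, then clamp to the image.
    t = min(max(cy - first_h // 2, 0), img_h - first_h)
    l = min(max(cx - first_w // 2, 0), img_w - first_w)
    return (t, l, t + first_h, l + first_w)

print(third_region((12, 34, 18, 46), 20, 30, 120, 160))  # (5, 25, 25, 55)
```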
In one embodiment, determining the third area containing the second area in the second image includes steps 310 to 330.
Step 310, obtaining a preset mapping relationship between the pixels in the first image and the pixels in the second image.
Optionally, the preset mapping relationship is established from the pixel-position correspondence between the first and second images, for example from the correspondence between the top-left starting pixel of the first image and that of the second image.
Optionally, the preset mapping relationship is established from the content correspondence between the two images, for example from the correspondence between the target object in the first image and the target object in the second image.
Step 320, determining a plurality of target pixels in the second image from the plurality of pixels in the first region through the preset mapping relationship. For example, the target-object pixels in the second image are identified from the target-object pixels in the first region, yielding the plurality of target pixels.
Step 330, determining a third area according to the plurality of target pixels. For example, from a plurality of target pixels, a region of the target object is determined, resulting in a third region containing the target object.
It can be understood that, since the image contents of the first region and the third region have consistency, replacing the first region in the first image with the third region can make the first image after replacing the region have consistency with the image contents of the original first image. For example, the first region is a region of a target object in the first image, the third region is a region of a target object in the second image, the region of the target object in the second image is replaced with the region of the target object in the first image, and the image content of the first image after the replacement of the region is the same as the image content of the first image before the replacement of the region.
Optionally, a third region in the second image is determined from the first region in the first image. For example, according to the correspondence between the image content in the first image and the image content in the second image, a third region corresponding to the first region is acquired in the second image, so that the content in the third region has consistency with the content in the first region. For another example, the position information of the first region in the first image is obtained, and the third region in the second image is determined according to the position information and the preset position corresponding relation.
Step 400, replacing the first area with a third area, and overlapping the target object in the first area with the target object in the third area.
Aligning the target object in the first region with the target object in the third region during replacement reduces missing image content and misalignment in the first image after the region replacement.
Compared with the first area, the third area has fewer preset feature areas, so that after the first area is replaced by the third area, the preset feature areas in the first image after the replacement area are smaller than those in the first image before the replacement area.
In one embodiment, replacing the first area with a third area and overlapping the target object in the first area with the target object in the third area includes step 400.
And 400, replacing the pixels in the first area with target pixels in the third area according to a preset mapping relation.
Optionally, the third region is extracted from the second image, and the pixels in the first region are replaced with the corresponding pixels in the third region according to the preset mapping relationship between them, so that the target object in the first region coincides with the target object in the third region. For example, the third region is extracted from the second image, and the pixels of the target object in the first region are replaced with the corresponding pixels of the target object in the third region.
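With aligned images and a 1:1 pixel mapping (an assumption; the text allows a more general preset mapping), the replacement step reduces to a slice assignment. A minimal sketch:

```python
import numpy as np

def replace_region(visible: np.ndarray, infrared: np.ndarray,
                   first, third) -> np.ndarray:
    """Copy the third region of `infrared` over the first region of `visible`.
    Both boxes are (top, left, bottom, right) and have identical shapes."""
    out = visible.copy()
    t1, l1, b1, r1 = first
    t3, l3, b3, r3 = third
    out[t1:b1, l1:r1] = infrared[t3:b3, l3:r3]
    return out

vis = np.full((50, 50), 255, dtype=np.uint8)  # glare in the visible image
ir = np.full((50, 50), 90, dtype=np.uint8)    # glare-free infrared values
patched = replace_region(vis, ir, (10, 10, 20, 20), (10, 10, 20, 20))
print(int(patched[15, 15]))                   # 90
```

Single-channel arrays are used here for brevity; a color visible-light image would need the infrared values broadcast or converted to three channels first.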
As shown in fig. 4, fig. 4 is a schematic diagram of the first image after replacing the area. The first image after replacing the area shown in fig. 4 is the image obtained by replacing the first area in the first image shown in fig. 2 with the third area in the second image shown in fig. 3.
And 500, identifying the first image after the area is replaced to obtain an identification result.
Because the preset-feature area in the first image after the region replacement is smaller than before the replacement, the interference of the preset feature with image recognition is reduced and recognition accuracy is improved.
In the image recognition method of this embodiment, a first image can be acquired and a first region of it containing a preset feature determined; a second image acquired and a second region of it containing the preset feature determined, with the first region larger than the second; a third region containing the second region and matching the size of the first region determined in the second image; and the first region then replaced with the third region. Because the preset-feature area in the replaced first image is smaller than in the original, performing image recognition on the replaced first image reduces the interference of the preset feature and improves recognition accuracy.
In one embodiment, obtaining the brightness value of the target object in the first image includes steps 610 and 620.
Step 610, a target subject and a target object in the first image are acquired.
Optionally, whether the first image contains a target subject and a target object is judged through a target-subject detection model and a target-object detection model; when both are present, their respective positions in the first image are acquired. Both detection models are trained neural network models.
It should be noted that the target subject is the object to be recognized, while the target object is an object related to the preset feature. For example, in face recognition, the face is the target subject and glasses that can produce reflective features are the target object.
Step 620, if the region where the target subject is located at least partially overlaps the region where the target object is located, the brightness value of the target object is acquired.
Optionally, whether the two regions at least partially overlap is judged from the respective positions of the target subject and the target object in the first image; when they do, the brightness value of the target object is acquired.
It can be understood that when the region of the target subject does not overlap the region of the target object, recognition accuracy is hardly affected; only when the two regions overlap is recognition of the target subject significantly affected.
Therefore, before acquiring the brightness value of the target object, it is first judged whether the two regions overlap: the brightness value is acquired only when the region of the target subject at least partially overlaps that of the target object, and is not acquired otherwise, which reduces the amount of computation.
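The overlap test described above can be sketched as a standard axis-aligned box intersection check; representing both regions as (top, left, bottom, right) boxes is an illustrative assumption.

```python
def boxes_overlap(a, b) -> bool:
    """True when axis-aligned boxes a and b share at least one pixel.
    Boxes are (top, left, bottom, right) with exclusive bottom/right."""
    at, al, ab, ar = a
    bt, bl, bb, br = b
    return at < bb and bt < ab and al < br and bl < ar

face = (10, 10, 110, 90)    # target subject (e.g. the face)
glasses = (35, 20, 55, 80)  # target object (e.g. the glasses)
print(boxes_overlap(face, glasses))  # True
```

Only when this check passes is the brightness computation of step 620 performed, skipping it otherwise.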
In one embodiment, determining the first region of the first image containing the predetermined feature includes steps 710 to 730.
And step 710, determining the position and the size of the target frame according to the position of the target object.
Optionally, the center position of the target object is determined from its position; a target frame is obtained and its size is set according to the size of the target object; the preset center point of the target frame is then made to coincide with the center of the target object, which fixes the position of the target frame.
Optionally, the target frame is shaped as a rectangular frame or a circular frame.
And 720, adjusting the size of the target frame to enable the target frame to cover the target object.
Optionally, the size of the target frame is increased or decreased according to the size of the target object, so that the target object is located within the coverage area of the target frame.
Step 730, the area covered by the target frame in the first image is used as the first area.
Optionally, an area covered by the target frame in the first image is acquired, and the area covered by the target frame in the first image is taken as the first area, so that the first area includes the target object.
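Steps 710 to 730 can be sketched as follows. The relative `margin` used to enlarge the frame is a hypothetical parameter; the embodiment only requires that the frame cover the target object:

```python
import numpy as np

def first_region(image, obj_box, margin=0.1):
    """Steps 710-730: center a frame on the target object, enlarge it so it
    covers the object (plus an assumed relative margin), clip it to the
    image, and return the covered area as the first region."""
    x, y, w, h = obj_box
    cx, cy = x + w / 2, y + h / 2                 # step 710: frame center = object center
    fw, fh = w * (1 + margin), h * (1 + margin)   # step 720: enlarge to cover the object
    x0 = max(int(cx - fw / 2), 0)
    y0 = max(int(cy - fh / 2), 0)
    x1 = min(int(cx + fw / 2), image.shape[1])
    y1 = min(int(cy + fh / 2), image.shape[0])
    # step 730: the area covered by the frame is the first region
    return (x0, y0, x1 - x0, y1 - y0), image[y0:y1, x0:x1]
```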
In a face recognition scenario, lighting may cause the glasses of the person to be recognized to reflect light strongly, so that a reflection feature appears in the eye area of the captured visible light image; this reflection feature reduces the accuracy of face recognition. The method captures a visible light image and an infrared image of the same scene at the same time and from the same angle. The glasses (corresponding to the reflection feature area) are then located in the visible light image, a target frame is determined from the glasses and adjusted so that it covers them, and the adjusted target frame is taken as the first area. The first area is then mapped onto the infrared image according to a preset mapping relationship, obtaining a third area containing the second area in the infrared image. Finally, the third area replaces the first area in the visible light image to generate a replaced visible light image; the replaced image contains no reflection feature area, so the recognition accuracy is improved.
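Under the simplifying assumption that the visible-light and infrared frames are pixel-aligned (i.e. the preset mapping relationship is the identity), the final replacement step can be sketched as:

```python
import numpy as np

def replace_region(visible, infrared, region):
    """Paste the corresponding infrared patch (third area) over the
    reflective first area of the visible-light image. Assumes the two
    frames are pixel-aligned, so the same coordinates index both."""
    x, y, w, h = region
    out = visible.copy()  # keep the original frame intact
    out[y:y + h, x:x + w] = infrared[y:y + h, x:x + w]
    return out
```

In a real device the two cameras are offset, so the coordinates would first pass through the calibrated mapping described below rather than being used directly.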
Referring to fig. 5, the image recognition apparatus 80 includes a first obtaining module 81, a second obtaining module 82, a determining module 83, a replacing module 84, and a recognizing module 85.
The first obtaining module 81 is configured to obtain a first image, and determine a first area in the first image, where the first area includes a preset feature, and the first image is an image captured by using a first capturing mode.
The second obtaining module 82 is configured to obtain a second image, and determine a second area in the second image, where the second area includes a preset feature, the second image is an image shot by using a second shooting mode, the shooting angle, the shooting time, and the shooting scene of the first image and the second image are the same, and the first area is larger than the second area.
A determining module 83, configured to determine a third region including the second region in the second image, where the size of the third region is the same as the size of the first region.
And a replacing module 84, configured to replace the first area with the third area and overlap the target object in the first area with the target object in the third area.
And the identifying module 85 is configured to identify the first image after the area is replaced, and obtain an identification result.
In one embodiment, the first obtaining module 81 is further configured to obtain a first image; acquiring the brightness value of a target object in a first image; if the brightness value of the target object is larger than the preset brightness threshold value, determining that preset features exist in the first image; a first region in the first image is determined that includes a preset feature.
In one embodiment, the first acquiring module 81 is further configured to acquire the target object and the target part in the first image; and if the area where the target object is located at least partially overlaps the area where the target part is located, acquire the brightness value of the target object.
In one embodiment, the first obtaining module 81 is further configured to determine a position and a size of the target frame according to the position of the target object; adjusting the size of the target frame to enable the target frame to cover the target object; and taking the area covered by the target frame in the first image as a first area.
In one embodiment, the determining module 83 is further configured to obtain a preset mapping relationship between pixels in the first image and pixels in the second image; determining a plurality of target pixels in a second area through a preset mapping relation according to a plurality of pixels in the first area; a third region is determined based on the plurality of target pixels.
In one embodiment, the replacing module 84 is further configured to replace the pixel in the first region with the target pixel in the third region according to a preset mapping relationship.
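The mapping performed by the determining module can be sketched as follows. A constant-offset mapping stands in for the preset calibrated relationship between the two cameras, which is a hypothetical simplification:

```python
import numpy as np

def third_region(first_region_pixels, mapping):
    """Map each pixel coordinate of the first region into the second image
    through the preset mapping, then take the bounding box of the mapped
    pixels as the third region. For a pure translation the third region
    has the same size as the first region, as the method requires."""
    mapped = np.array([mapping(p) for p in first_region_pixels])
    x0, y0 = mapped.min(axis=0)
    x1, y1 = mapped.max(axis=0)
    return int(x0), int(y0), int(x1 - x0 + 1), int(y1 - y0 + 1)
```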
Each of the above embodiments has its own emphasis; for parts not described in detail in a given embodiment, reference may be made to the detailed description of the image recognition method above, which is not repeated here.
In one application scenario, the image recognition device is an access control device. The entrance guard equipment is provided with a visible light camera (a first acquisition module), a binocular camera integrated with an infrared camera (a second acquisition module), and a main control board integrated with a determination module, a replacement module and an identification module.
The image recognition device of this embodiment can acquire the first image and determine the first area containing the preset feature in the first image; acquire the second image and determine the second area containing the preset feature in the second image, where the first area is larger than the second area; determine the third area containing the second area in the second image, the third area being the same size as the first area; and then replace the first area with the third area. The preset feature area in the first image after replacement is smaller than before replacement, so in recognizing the replaced first image, the interference of the preset feature is reduced and the recognition accuracy of the image is improved.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program which, when executed on a computer, causes the computer to execute the image recognition method as above.
It should be noted that, for the readable storage medium of the embodiments of the present application, those skilled in the art can understand that all or part of the process of implementing the image recognition method may be completed by a computer program controlling the relevant hardware. The computer program may be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor; the execution may include the process of the embodiment of the image recognition method. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The embodiment also provides an electronic device comprising a memory and a processor, wherein the processor is used for executing the image recognition method by calling the computer program stored in the memory.
The electronic device provided in the embodiment of the present application and the image recognition method in the above embodiments belong to the same concept, and any method provided in the embodiment of the image recognition method may be executed on the electronic device, and the specific implementation process thereof is described in detail in the embodiment of the image recognition method, and is not described herein again.
In the electronic device according to the embodiment of the present application, each functional module may be integrated into one processing chip, or each module may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, or the like.
The foregoing describes in detail an image recognition method, an image recognition apparatus, a storage medium, and an electronic device provided in embodiments of the present application, and specific examples are applied herein to explain the principles and implementations of the present application, and the description of the foregoing embodiments is only used to help understand the method and the core ideas of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An image recognition method, comprising:
acquiring a first image, and determining a first area containing preset features in the first image, wherein the first image is an image shot by adopting a first shooting mode;
acquiring a second image, and determining a second area containing the preset features in the second image, wherein the second image is an image shot by adopting a second shooting mode, the shooting angle, the shooting time and the shooting scene of the first image and the second image are the same, and the first area is larger than the second area;
determining a third region containing the second region in the second image, wherein the size of the third region is the same as that of the first region;
replacing the first area with the third area and overlapping the target object in the first area with the target object in the third area;
and identifying the first image after the area is replaced to obtain an identification result.
2. The image recognition method of claim 1, wherein the obtaining the first image and determining the first region of the first image containing the preset feature comprises:
acquiring the first image;
acquiring the brightness value of a target object in the first image;
if the brightness value of the target object is larger than a preset brightness threshold value, determining that the preset feature exists in the first image;
determining a first region of the first image containing the preset feature.
3. The image recognition method according to claim 2, wherein the obtaining of the brightness value of the target object in the first image comprises:
acquiring a target object and a target part in the first image;
and if the area where the target object is located at least partially overlaps the area where the target part is located, acquiring the brightness value of the target object.
4. The image recognition method of claim 2, wherein the determining the first region of the first image containing the preset feature comprises:
determining the position and the size of a target frame according to the position of the target object;
adjusting the size of the target frame to enable the target frame to cover the target object;
and taking the area covered by the target frame in the first image as the first area.
5. The image recognition method according to claim 1, wherein the first photographing mode is a visible light photographing mode, and the second photographing mode is an infrared photographing mode.
6. The image recognition method according to claim 1, wherein the determining a third region including the second region in the second image includes:
acquiring a preset mapping relation between pixels in the first image and pixels in the second image;
determining a plurality of target pixels in the second region through the preset mapping relation according to the plurality of pixels in the first region;
and determining the third area according to the plurality of target pixels.
7. The image recognition method according to claim 6, wherein the replacing the first region with the third region and overlapping the target object in the first region with the target object in the third region includes:
and replacing the pixels in the first area with target pixels in the third area according to the preset mapping relation.
8. An image recognition apparatus, comprising:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a first image and determining a first area containing preset characteristics in the first image, and the first image is an image shot by adopting a first shooting mode;
the second acquisition module is used for acquiring a second image and determining a second area containing the preset characteristics in the second image, wherein the second image is shot by adopting a second shooting mode, the shooting angle, the shooting time and the shooting scene of the first image and the second image are the same, and the first area is larger than the second area;
a determining module, configured to determine a third region including the second region in the second image, where the size of the third region is the same as the size of the first region;
a replacing module, configured to replace the first area with the third area, and overlap the target object in the first area with the target object in the third area;
and the identification module is used for identifying the first image after the area is replaced to obtain an identification result.
9. A computer-readable storage medium, on which a computer program is stored, which, when executed on a computer, causes the computer to carry out an image recognition method according to any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, wherein the processor is configured to execute the image recognition method according to any one of claims 1 to 7 by calling a computer program stored in the memory.
CN202111628334.XA 2021-12-27 2021-12-27 Image recognition method, image recognition device, storage medium and electronic equipment Pending CN114333010A (en)
Publications (1)

Publication Number Publication Date
CN114333010A true CN114333010A (en) 2022-04-12
Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination