CN117011830A - Image recognition method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN117011830A
Authority
CN
China
Prior art keywords
image
filling
pattern
detection area
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311029663.1A
Other languages
Chinese (zh)
Other versions
CN117011830B (en)
Inventor
穆阳
刘志州
梁耀聪
Current Assignee
Microbrand Technology Zhejiang Co ltd
Original Assignee
Microbrand Technology Zhejiang Co ltd
Priority date
Filing date
Publication date
Application filed by Microbrand Technology Zhejiang Co ltd filed Critical Microbrand Technology Zhejiang Co ltd
Priority to CN202311029663.1A priority Critical patent/CN117011830B/en
Publication of CN117011830A publication Critical patent/CN117011830A/en
Application granted granted Critical
Publication of CN117011830B publication Critical patent/CN117011830B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an image recognition method, an image recognition device, computer equipment and a storage medium. The method comprises the following steps: acquiring an external image to be detected of a vehicle, wherein the external image to be detected comprises at least part of a vehicle body image of the vehicle; filling a non-detection area in the external image to be detected with a first filling sample to form a first filling image, wherein the non-detection area is determined based on a target image, the target image is an external image of the vehicle acquired by the same image acquisition equipment before the external image to be detected is acquired, and the non-detection area comprises the image area where the at least part of the vehicle body is located; and identifying the first filling image to obtain a first identification result, the first identification result comprising: an obstacle exists in the non-detection area in the first filling image, or the non-detection area in the first filling image is free of an obstacle. Because the at least part of the vehicle body in the first filling image is covered by the first filling sample, the influence of reflections and patterns on the vehicle body on the identification accuracy is avoided, and the accuracy of image identification is improved.

Description

Image recognition method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to an image recognition method, apparatus, computer device, storage medium, and computer program product.
Background
With the development of artificial intelligence, technologies for processing images based on artificial intelligence have emerged, and image recognition is now applied in an ever-growing range of fields, for example, recognizing images to determine whether risk factors that affect safe driving exist around a vehicle.
In the conventional technology, for example in the intelligent blind-area detection system of a large bus or a similar vehicle, cameras are installed around the vehicle body to collect images of the surroundings (an image here may also be one frame of a continuous video stream, and is likewise referred to as an image hereinafter), and the collected images are transmitted to an intelligent recognition processor for recognition. One or more warning areas are preset in the camera's field of view, and in most cases the picture outside the warning areas is treated as a non-detection area. However, when the non-detection area contains a picture of the vehicle body, factors such as the reflected images of people, vehicles and trees on the body and the patterns carried by the body may be misjudged as dangerous factors during recognition even though they pose no actual danger, so the accuracy of image recognition is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image recognition method, apparatus, computer device, computer-readable storage medium, and computer program product that can improve the accuracy of image recognition.
In a first aspect, the present application provides an image recognition method, the method comprising:
acquiring an external image to be detected of a vehicle, wherein the external image to be detected comprises at least part of a vehicle body image of the vehicle;
filling a non-detection area in the external image to be detected by using a first filling sample to form a first filling image; the non-detection area is determined based on a target image, wherein the target image is an external image of the vehicle acquired by the same image acquisition equipment before the external image to be detected is acquired; the non-detection area comprises an image area where the at least part of the vehicle body is located;
identifying the first filling image to obtain a first identification result, wherein the first identification result comprises: an obstacle exists in the non-detection area in the first filled image, or the non-detection area in the first filled image is free of an obstacle;
wherein the first filling sample comprises a pattern or a color.
In one embodiment, in a case where the first recognition result is that the non-detection area in the first filling image has the obstacle, the image recognition method further includes:
filling the non-detection area in the first filling image with a second filling sample to form a second filling image;
identifying the second filling image to obtain a second identification result, wherein the second identification result comprises: an obstacle exists in the non-detection area in the second filled image, or the non-detection area in the second filled image is free of an obstacle;
wherein the second filling sample is different from the first filling sample, the second filling sample comprising a pattern or a color.
In one embodiment, at least one of the following is also included:
outputting a first alarm prompt before the filling of the non-detection area in the first filling image with the second filling sample;
outputting a second alarm prompt when the second recognition result is that the non-detection area in the second filling image has the obstacle;
and outputting a false alarm prompt when the second recognition result is that the non-detection area in the second filling image does not have the obstacle.
In one embodiment, the external image to be detected further comprises a pattern area connected with the non-detection area, and the pattern area is not overlapped with the non-detection area;
before the filling of the non-detection area in the external image to be detected with the first filling sample, the method further comprises:
acquiring pattern or color information of the pattern area;
determining the first filling sample based on the pattern or color information of the pattern area; the similarity between the pattern of the first filling sample and the pattern of the pattern area is below a set first threshold, or the difference between the color of the first filling sample and the dominant color of the pattern area is greater than a set second threshold.
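As an illustrative sketch only (not part of the claimed method), the color branch of this embodiment can be read as: quantize the pattern area's pixels to find its dominant color, then choose a candidate fill color whose distance from that dominant color exceeds the second threshold. The function names, quantization step and candidate palette below are all hypothetical:

```python
import numpy as np

def dominant_color(region: np.ndarray) -> np.ndarray:
    """Most frequent quantized color in an H x W x 3 pattern area."""
    # Quantize to 32-level bins so near-identical shades group together.
    quant = (region.reshape(-1, 3) // 32) * 32
    colors, counts = np.unique(quant, axis=0, return_counts=True)
    return colors[counts.argmax()]

def pick_fill_color(region: np.ndarray, threshold: float = 100.0):
    """Pick a fill color whose distance to the dominant color of the
    pattern area exceeds the threshold (the 'second threshold')."""
    dom = dominant_color(region).astype(float)
    # Hypothetical candidate palette; any sufficiently distant color works.
    candidates = np.array([[0, 0, 0], [255, 255, 255],
                           [0, 255, 0], [255, 0, 255]], dtype=float)
    dists = np.linalg.norm(candidates - dom, axis=1)
    ok = candidates[dists > threshold]
    return ok[0].astype(np.uint8) if len(ok) else None
```

For a mid-gray pattern area, for instance, black is far enough from the dominant color and would be selected as the fill.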
In one embodiment, before the filling the non-detection region in the first filling image with the second filling sample, the method further includes:
acquiring pattern or color information of the first filling sample;
determining the second filling sample based on the first filling sample and the pattern or color information of the pattern area; the pairwise similarities among the pattern of the second filling sample, the pattern of the first filling sample and the pattern of the pattern area are all below a set third threshold, or the color of the second filling sample is complementary to the color of the first filling sample and differs from the dominant color of the pattern area.
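A minimal sketch of the color branch of this embodiment: take the complement of the first fill color and accept it only when it also differs clearly from the dominant color of the pattern area. All names and the threshold value are hypothetical illustrations:

```python
import numpy as np

def complementary(color) -> np.ndarray:
    """Complementary color: each channel inverted."""
    return 255 - np.asarray(color, dtype=np.uint8)

def pick_second_fill(first_fill, pattern_dominant, threshold: float = 100.0):
    """Second fill sample: the complement of the first fill, accepted only
    when it also differs clearly from the pattern area's dominant color."""
    second = complementary(first_fill).astype(float)
    dom = np.asarray(pattern_dominant, dtype=float)
    if np.linalg.norm(second - dom) > threshold:
        return second.astype(np.uint8)
    return None  # a caller would fall back to another candidate
```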
In one embodiment, the identifying the first filling image, to obtain a first identification result, includes:
identifying the first filling image, and outputting the first identification result that an obstacle exists in the non-detection area in the first filling image when it is identified that an obstacle at least partially overlaps the non-detection area, or when it is identified that the overlapping area between an obstacle and the non-detection area is larger than a first preset value;
the step of identifying the second filling image to obtain a second identification result comprises the following steps:
and identifying the second filling image, and outputting the second identification result that an obstacle exists in the non-detection area in the second filling image when it is identified that an obstacle at least partially overlaps the non-detection area, or when it is identified that the overlapping area between an obstacle and the non-detection area is larger than a second preset value.
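The overlap test of this embodiment can be sketched as follows, assuming (hypothetically) that a detector returns an obstacle bounding box and that the non-detection area is a boolean mask of the image:

```python
import numpy as np

def overlap_area(obstacle_box, mask) -> int:
    """Pixels of the obstacle bounding box (x0, y0, x1, y1) that fall
    inside the non-detection area mask (boolean H x W array)."""
    x0, y0, x1, y1 = obstacle_box
    return int(mask[y0:y1, x0:x1].sum())

def obstacle_in_non_detection(obstacle_box, mask, preset: int = 0) -> bool:
    """Report an obstacle when it overlaps the non-detection area at all
    (preset = 0) or when the overlap exceeds the preset value."""
    return overlap_area(obstacle_box, mask) > preset
```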
In one embodiment, the external image to be detected further comprises a pattern area connected with the non-detection area, and the pattern area is not overlapped with the non-detection area;
Before the filling of the non-detection area in the external image to be detected with the first filling sample, the image recognition method further includes:
acquiring a plurality of external images of the vehicle acquired by the image acquisition device, and extracting the pattern areas in the external images;
extracting a pattern sample and/or a color sample based on the pattern areas of the plurality of external images, wherein the pattern sample is a pattern whose probability of occurrence in the pattern areas of the plurality of external images is greater than a preset probability, and the color sample is a color whose proportion in the pattern areas of the plurality of external images exceeds a set fourth threshold;
determining the first filling sample based on the pattern sample or the color sample, wherein the pattern of the first filling sample is different from the pattern of the pattern sample, and the color of the first filling sample is different from the color of the color sample.
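One way to read the color-sample branch of this embodiment is as a frequency count over the pattern areas of many frames. The sketch below (hypothetical names, fixed quantization step) returns the quantized colors whose share across all collected pattern areas exceeds the fourth threshold:

```python
import numpy as np

def color_sample(pattern_areas, ratio_threshold: float = 0.3) -> np.ndarray:
    """Colors whose share across all pattern areas exceeds the
    ratio threshold (the 'fourth threshold' of this embodiment)."""
    # Pool all pixels from every extracted pattern area.
    pixels = np.concatenate([a.reshape(-1, 3) for a in pattern_areas])
    # Quantize so similar shades are counted together.
    quant = (pixels // 32) * 32
    colors, counts = np.unique(quant, axis=0, return_counts=True)
    return colors[counts / len(quant) > ratio_threshold]
```

The first filling sample would then be chosen to differ from every color this returns.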
In a second aspect, the present application also provides an image recognition apparatus, the apparatus comprising:
the image acquisition module is used for acquiring an external image to be detected of a vehicle, wherein the external image to be detected comprises at least part of a vehicle body image of the vehicle;
The image filling module is used for filling the non-detection area in the external image to be detected by using a first filling sample to form a first filling image; the non-detection area is determined based on a target image, wherein the target image is an external image of the vehicle acquired by the same image acquisition equipment before the external image to be detected is acquired; the non-detection area comprises an image area where the at least part of the vehicle body is located;
the image recognition and result output module is used for recognizing the first filling image to obtain a first recognition result, and the first recognition result comprises: an obstacle exists in the non-detection area in the first filled image, or the non-detection area in the first filled image is free of an obstacle;
wherein the first filling sample comprises a pattern or a color.
In a third aspect, the present application also provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring an external image to be detected of a vehicle, wherein the external image to be detected comprises at least part of a vehicle body image of the vehicle;
Filling a non-detection area in the external image to be detected by using a first filling sample to form a first filling image; the non-detection area is determined based on a target image, wherein the target image is an external image of the vehicle acquired by the same image acquisition equipment before the external image to be detected is acquired; the non-detection area comprises an image area where the at least part of the vehicle body is located;
identifying the first filling image to obtain a first identification result, wherein the first identification result comprises: an obstacle exists in the non-detection area in the first filled image, or the non-detection area in the first filled image is free of an obstacle;
wherein the first filling sample comprises a pattern or a color.
In a fourth aspect, the present application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring an external image to be detected of a vehicle, wherein the external image to be detected comprises at least part of a vehicle body image of the vehicle;
filling a non-detection area in the external image to be detected by using a first filling sample to form a first filling image; the non-detection area is determined based on a target image, wherein the target image is an external image of the vehicle acquired by the same image acquisition equipment before the external image to be detected is acquired; the non-detection area comprises an image area where the at least part of the vehicle body is located;
Identifying the first filling image to obtain a first identification result, wherein the first identification result comprises: an obstacle exists in the non-detection area in the first filled image, or the non-detection area in the first filled image is free of an obstacle;
wherein the first filling sample comprises a pattern or a color.
In a fifth aspect, the application also provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of:
acquiring an external image to be detected of a vehicle, wherein the external image to be detected comprises at least part of a vehicle body image of the vehicle;
filling a non-detection area in the external image to be detected by using a first filling sample to form a first filling image; the non-detection area is determined based on a target image, wherein the target image is an external image of the vehicle acquired by the same image acquisition equipment before the external image to be detected is acquired; the non-detection area comprises an image area where the at least part of the vehicle body is located;
identifying the first filling image to obtain a first identification result, wherein the first identification result comprises: an obstacle exists in the non-detection area in the first filled image, or the non-detection area in the first filled image is free of an obstacle;
wherein the first filling sample comprises a pattern or a color.
The image recognition method, apparatus, computer device, storage medium and computer program product above define the non-detection area to include at least part of the vehicle body, and fill the non-detection area in the external image to be detected with the first filling sample to form the first filling image. Because the at least part of the vehicle body in the first filling image is covered by the first filling sample, the non-detection area seen during recognition is a specific pattern or color; this avoids the influence of the reflections and body patterns present on the at least part of the vehicle body on the recognition accuracy, and thus improves the accuracy of image recognition.
Drawings
FIG. 1 is an application environment diagram of image recognition provided by an embodiment of the present application;
fig. 2 is a flowchart of a first embodiment of an image recognition method according to an embodiment of the present application;
fig. 3 is a flowchart of a second embodiment of an image recognition method according to an embodiment of the present application;
fig. 4 is a flowchart of a third embodiment of an image recognition method according to an embodiment of the present application;
fig. 5 is a flowchart of a fourth embodiment of an image recognition method according to an embodiment of the present application;
Fig. 6 is a flowchart of a fifth embodiment of an image recognition method according to an embodiment of the present application;
fig. 7 is a flowchart of a sixth embodiment of an image recognition method according to an embodiment of the present application;
fig. 8 is a flowchart of a seventh embodiment of an image recognition method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an image recognition device according to an embodiment of the present application;
fig. 10 is an internal structure diagram of a computer device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In the conventional technology, for example in the intelligent blind-area detection system of a large bus or a similar vehicle, cameras are installed around the vehicle body to collect images of the surroundings (an image here may also be one frame of a continuous video stream, and is likewise referred to as an image hereinafter), and the collected images are transmitted to an intelligent recognition processor for recognition. One or more warning areas are preset in the camera's field of view, and in most cases the picture outside the warning areas is treated as a non-detection area. However, when the non-detection area contains a picture of the vehicle body, factors such as the reflected images of people, vehicles and trees on the body and the patterns carried by the body may be misjudged as dangerous factors during recognition even though they pose no actual danger, so the accuracy of image recognition is low.
In addition, if the acquired image always contains part of the vehicle body, the factors affecting recognition accuracy are always present during operation and do not disappear as the vehicle moves, so their harm is more serious than that of other factors that cause false alarms.
Based on the above, the image recognition method provided by the embodiments of the present application defines the non-detection area to include at least part of the vehicle body and fills the non-detection area in the external image to be detected with the first filling sample to form the first filling image. Because the at least part of the vehicle body in the first filling image is covered by the first filling sample, the non-detection area seen during recognition is a specific pattern or color, which avoids the influence of the reflections and body patterns present on the at least part of the vehicle body on the recognition accuracy and thus improves the accuracy of image recognition.
It should be noted that the beneficial effects or the technical problems to be solved by the embodiments of the present application are not limited to this one, but may be other implicit or related problems, and particularly, reference may be made to the following description of embodiments.
The following describes the technical scheme of the present application and how the technical scheme of the present application solves the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is an application environment diagram of image recognition according to an embodiment of the present application. The image recognition method provided by the embodiments of the present application can be applied to the application environment shown in fig. 1. The vehicle-mounted terminal 12 is in communication connection with the image acquisition device 13 installed on the vehicle, receives the images acquired and sent by the image acquisition device 13, and either processes the images itself or transmits them to the server 14 for processing. A data storage system can store the data to be processed by the vehicle-mounted terminal 12 or the server 14, and can be integrated on the vehicle-mounted terminal 12 or the server 14, or placed on the cloud or on another network server; the data storage system may, for example, store acquired data such as external images to be detected. The vehicle-mounted terminal 12 may be, but is not limited to, various internet-of-things devices, for example, intelligent vehicle-mounted devices. The server 14 may be implemented as a stand-alone server or as a server cluster.
Fig. 2 is a flowchart of an image recognition method according to an embodiment of the present application, in an embodiment, as shown in fig. 2, an image recognition method is provided, and an application environment of the method in fig. 1 is taken as an example to describe the method, which includes the following method steps:
Step 101, an external image to be detected of a vehicle is acquired, the external image to be detected comprising at least part of a body image of the vehicle.
Specifically, the image recognition method provided by the present application includes at least steps 101 to 103. Step 101 is used to obtain an external image to be detected of a vehicle, which may be captured by any image capturing device; the external image to be detected may be a picture directly captured by the image capturing device or one frame of a video stream captured by the image capturing device. It should be noted that the image capturing device may be fixed on the external surface of the vehicle so that the captured image includes the external body of the vehicle; for example, the image capturing device may be placed in a rearview mirror or mounted around it. These are only two alternative embodiments provided by the present application, and the present application is not limited thereto, as long as the mounted image capturing device can capture the external image to be detected of the vehicle.
Further, the present application provides an alternative embodiment in which the external image to be detected obtained in step 101 includes at least part of the vehicle body image, for example at least part of the left-side body image or at least part of the right-side body image; if required, the external image to be detected may also include both at the same time.
Step 102, filling a non-detection area in an external image to be detected by using a first filling sample to form a first filling image; the non-detection area is determined based on a target image, wherein the target image is an external image of the vehicle acquired by the same image acquisition equipment before the external image to be detected is acquired; the non-detection area includes an image area in which at least a portion of the vehicle body is located.
Specifically, after the external image to be detected of the vehicle is obtained through step 101, it may be further processed through step 102. The present application provides an alternative embodiment in which a non-detection area, that is, at least a partial area outside the alarm area, is defined in the external image to be detected, and the non-detection area is then filled with a first filling sample, so that the picture of the defined non-detection area is covered by the specific first filling sample, thereby forming a first filling image.
The non-detection area in the external image to be detected is determined based on a target image, wherein the target image is an external image of the vehicle acquired from the same direction by the same image acquisition device before the external image to be detected is acquired, and the non-detection area includes the image area where the at least part of the vehicle body is located.
Because the non-detection area includes the image area where the at least part of the vehicle body is located, filling the non-detection area in the external image to be detected with the first filling sample in step 102 is equivalent to covering the at least part of the vehicle body with the first filling sample, that is, covering the reflected images of people, vehicles and trees on the body and the patterns carried by the body. This avoids the influence of these factors on the image recognition accuracy, improves the recognition result, prevents false recognition information from affecting the user's driving safety, and improves the user experience. When the non-detection area is defined to include the whole body portion in the external image to be detected, the first filling sample completely covers the reflections on the body and the patterns carried by the body, which is even more favorable for improving the image recognition accuracy.
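For illustration only, the filling operation of step 102 can be sketched with a boolean mask marking the non-detection area; the fill sample may be a solid color or a full-size pattern image. The function and parameter names are hypothetical:

```python
import numpy as np

def fill_non_detection(image: np.ndarray, mask: np.ndarray,
                       fill_sample) -> np.ndarray:
    """Cover the non-detection area of `image` with `fill_sample`.
    `mask` is a boolean H x W array marking the non-detection area;
    `fill_sample` is a color triple or an H x W x 3 pattern."""
    filled = image.copy()
    # Broadcast a color triple up to the image shape; a full-size
    # pattern image passes through unchanged.
    sample = np.broadcast_to(np.asarray(fill_sample, dtype=image.dtype),
                             image.shape)
    filled[mask] = sample[mask]
    return filled
```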
It should be further noted that the present application does not specifically limit the style, color, etc. of the first filling sample; the first filling sample comprises a pattern or a color, and its specific pattern lines, colors, etc. can be selected as needed.
Step 103, identifying the first filling image to obtain a first identification result, wherein the first identification result comprises: an obstacle exists in the non-detection area in the first filling image, or the non-detection area in the first filling image is free of an obstacle.
After steps 101 and 102, the first filling image, in which the non-detection area has been filled with the first filling sample, may be identified through step 103 to obtain a first identification result. The first identification result includes at least two types: the first is that an obstacle exists in the non-detection area in the first filling image, and the second is that the non-detection area in the first filling image is free of an obstacle. When an obstacle exists in the non-detection area, there are in turn two cases: the whole obstacle falls into the non-detection area of the first filling image, or only part of the obstacle falls into it.
Through steps 101 to 103, the present application acquires the external image to be detected of the vehicle, defines the non-detection area in it to include at least part of the vehicle body, and fills the non-detection area with the first filling sample to form the first filling image. Because the at least part of the vehicle body in the first filling image is covered by the first filling sample, the non-detection area seen during recognition is a specific pattern or color, which avoids the influence of the reflections and body patterns present on the at least part of the vehicle body on the recognition accuracy and improves the accuracy of image recognition.
It should be further noted that the first filling sample provided by the present application comprises a pattern or a color; these are only two alternative embodiments, and the present application is not limited thereto, since the first filling sample may also be set to include both a pattern and a color.
Fig. 3 is another flowchart of an image recognition method according to an embodiment of the present application, referring to fig. 3, in some embodiments, when the first recognition result is that there is an obstacle in a non-detection area in the first filling image, the image recognition method further includes:
step 104, filling the non-detection area in the first filling image with a second filling sample to form a second filling image.
Specifically, after steps 101 to 103 are performed, in order to further improve the accuracy of image recognition, the present application provides an alternative embodiment in which the image recognition method further includes steps 104 and 105. In a case where the first recognition result is that an obstacle exists in the non-detection area in the first filling image, the non-detection area in the first filling image may be further filled with a second filling sample through step 104 to form a second filling image. It should be added that the second filling sample used here needs to be different from the first filling sample, so that the obstacle apparently present in the non-detection area can be covered again; this further avoids the influence of the reflections and body patterns present on the vehicle body in the non-detection area on the recognition accuracy and improves the image recognition accuracy.
The second filling sample may likewise include a pattern or a color; the present application does not limit which pattern or color is used, provided that the pattern or color selected for the second filling sample differs from that of the corresponding first filling sample, so that any obstacle apparently present in the non-detection area is covered again.
Step 105, identifying the second filling image to obtain a second identification result, where the second identification result includes: the non-detection area in the second filling image has an obstacle, or the non-detection area in the second filling image has no obstacle;
wherein the second filling sample is different from the first filling sample, and the second filling sample comprises a pattern or a color.
Specifically, step 105 is configured to identify the second filling image obtained in step 104 to obtain a second identification result, where the second identification result includes at least two types, a first type is that a non-detection area in the second filling image has an obstacle, and a second type is that the non-detection area in the second filling image has no obstacle.
Through steps 101-105, secondary identification of the external image to be detected of the vehicle is realized: when the first filling image filled with the first filling sample is identified and the result is that an obstacle exists in its non-detection area, the second filling image, obtained by filling the non-detection area of the first filling image with the second filling sample, is further identified. This secondary filling helps cover any apparent obstacle in the non-detection area a second time, further preventing reflections and body patterns on the vehicle body within the non-detection area from affecting recognition accuracy. The accuracy of image recognition is therefore further improved, the influence of misrecognized information on driving safety is avoided, and user experience is improved.
It should be further noted that the second filling sample including a pattern or a color describes only two alternative embodiments provided in the present application; the application is not limited thereto, and the second filling sample may alternatively be provided so as to include both a pattern and a color.
Fig. 4 is another flowchart of an image recognition method provided by an embodiment of the present application, and fig. 5 is another flowchart of an image recognition method provided by an embodiment of the present application. Referring to fig. 4 and fig. 5, in some implementations, step 201 is executed before the non-detection area in the first filling image is filled with the second filling sample, outputting a first alarm prompt;
executing step 202 to output a second alarm prompt when the second recognition result is that the non-detection area in the second filling image has an obstacle;
and executing step 203 to output false alarm prompt when the second recognition result is that no obstacle exists in the non-detection area in the second filling image.
Specifically, on the basis of steps 101-104, the application further provides an alternative implementation in which the image recognition method includes at least one of steps 201, 202 and 203. Before step 104 is executed, that is, when the first recognition result obtained in step 103 is that an obstacle exists in the non-detection area of the first filling image, a first alarm prompt may be output to the vehicle through step 201, so that the driver knows that an obstacle posing a risk may exist outside the vehicle and is reminded to pay attention to the situation outside the vehicle, avoiding a traffic accident. It should be added that, as shown in fig. 4 and 5, step 201 occurs only when the recognition result of step 103 is that an obstacle exists in the non-detection area of the first filling image.
In addition, when the first recognition result obtained in step 103 is that no obstacle exists in the non-detection area of the first filling image, the recognition result may be output directly. After recognition of one external image to be detected is finished, that is, after its recognition result is output, the next external image to be detected can be identified; the process of identifying the next image includes at least steps 101-103, or at least steps 101-105, followed by output of the recognition result. It should further be added that, for images acquired by the same image acquisition device at the same position, as described in step 109, the non-detection area may be set according to any one image (the target image) acquired by that device, and the non-detection areas of the other images acquired by the same device may be set according to the non-detection area set in the target image. After the non-detection area is set, the corresponding non-detection area in each external image to be detected can be filled directly with the first filling sample in step 102, which avoids redefining the non-detection area for each image and improves image recognition efficiency.
It should be noted that fig. 5 may be read in conjunction with fig. 4. Fig. 5 does not show the subsequent flow when the first recognition result is that no obstacle exists in the non-detection area of the first filling image; in that case the result is not necessarily output, and whether to output the recognition result may be selected according to requirements, which the present application does not specifically limit. As shown in fig. 5, the output after step 105 may include step 202, outputting a second alarm prompt, or step 203, outputting a false alarm prompt. Referring to fig. 4, after step 202 or step 203, the next external image to be detected of the vehicle may be acquired to identify whether an obstacle exists in its non-detection area.
As shown in fig. 5, when the second recognition result obtained in step 105 is that an obstacle exists in the non-detection area of the second filling image, the obstacle is considered actually present, and step 202 may be executed to output a second alarm prompt to the vehicle, so that the driver knows that an obstacle posing a risk may exist outside the vehicle and is reminded to monitor the situation outside the vehicle, avoiding a traffic accident.
As shown in fig. 5, when the second recognition result obtained in step 105 is that no obstacle exists in the non-detection area of the second filling image, the first recognition result (that an obstacle exists in the non-detection area of the first filling image) may be considered a false alarm, and step 203 may be executed to output a false alarm prompt to the vehicle; specifically, this informs the driver that the first alarm prompt output in step 201 was erroneous and that no obstacle exists around the vehicle, so that the driver can continue to drive safely. Driving safety is thereby improved, and the driver is spared stopping to inspect the vehicle because of a false alarm, avoiding the waste of the driver's time caused by image misrecognition.
It should be further noted that step 201 is not necessarily performed. For example, when the first recognition result is that an obstacle exists in the non-detection area of the first filling image, steps 104 and 105 may be executed directly, followed by step 202 to output a second alarm prompt to the vehicle. That is, the second recognition of the image to be detected is performed directly, which improves image recognition efficiency and avoids sending the vehicle a prompt when the first recognition result is a misrecognition; the driver's state is then not affected by a misrecognized prompt, and driving safety is improved. In other words, steps 201, 202 and 203 shown in fig. 5 need not all be present in the image recognition method; one, two or all three of them may be included as needed.
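The two-stage flow of steps 102-105, with the optional prompts of steps 201-203, can be sketched as follows. `fill_area` and `recognize` are hypothetical stand-ins for the filling operation and whatever detector the implementation uses; they are not named in the application.

```python
def two_stage_check(to_detect, fill_area, recognize,
                    first_fill, second_fill, warn=print):
    """Two-stage recognition sketch.

    fill_area(image, sample) fills the non-detection area with a sample;
    recognize(image) returns True when an obstacle is found in that area.
    """
    first_image = fill_area(to_detect, first_fill)      # step 102
    if not recognize(first_image):                      # step 103
        return "no_obstacle"
    warn("first alarm")                                 # step 201 (optional)
    second_image = fill_area(first_image, second_fill)  # step 104
    if recognize(second_image):                         # step 105
        warn("second alarm")                            # step 202
        return "obstacle_confirmed"
    warn("false alarm")                                 # step 203
    return "false_alarm"
```

A detector that still reports an obstacle after the second, different fill confirms it; one that no longer does exposes the first result as a false alarm.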
It should be further added that when the first filling image is identified in step 103 and the first recognition result is that no obstacle exists in the non-detection area of the first filling image, the method may choose not to transmit any prompt information to the driver, that is, output no recognition result, so as to avoid disturbing the driving process and ensure driving safety. Of course, the application is not limited thereto; as required, and subject to limiting conditions such as a selected time or a selected area, the driver may also be prompted that the first recognition result is that no obstacle exists in the non-detection area of the first filling image.
It should also be supplemented that the application does not limit the specific implementation of the alarm prompts and the false alarm prompt; for example, prompt information may be provided to the driver through an in-vehicle display screen, the vehicle lights, sound, and the like. In addition, when the first alarm prompt is output in step 201 and/or the second alarm prompt is output in step 202, the vehicle may be controlled to decelerate or even brake, further improving driving safety.
Fig. 6 is another flowchart of the image recognition method provided in the embodiment of the present application; please refer to fig. 6 in conjunction with figs. 4 and 5. It should be noted that fig. 6 omits the judgment frame between step 103 and step 201 purely for layout reasons; as in figs. 4 and 5, step 201 outputs a first alarm prompt only when the first recognition result obtained in step 103 is that an obstacle exists in the non-detection area of the first filling image. Fig. 6 does not mean that step 201 is executed when the first recognition result is that no obstacle exists; as shown in fig. 4, in that case step 201 is not executed and the recognition result is output.

In some embodiments, the external image to be detected further includes a pattern area contiguous with the non-detection area, and the pattern area does not overlap the non-detection area;
Before filling the non-detection area in the external image to be detected with the first filling sample, the method further comprises:
step 301, obtaining pattern or color information of a pattern area;
step 302, determining a first filling sample based on the pattern or color information of the pattern area; the similarity between the pattern of the first filling sample and the pattern of the pattern area is lower than a set first threshold, or the difference between the color of the first filling sample and the principal-component color of the pattern area is greater than a set second threshold.
Specifically, the external image to be detected collected in step 101 includes, in addition to the non-detection area, a warning area and/or other areas. The warning area is an area for which, when an obstacle exists there, a warning prompt must be output for the driver; that is, an obstacle appearing in the warning area is very likely to affect driving safety. The warning area and/or other areas include a pattern area bordering the non-detection area, and the pattern area does not overlap the non-detection area.
Based on this, the present application provides an alternative embodiment in which, before step 102 of filling the non-detection area in the external image to be detected with the first filling sample, steps 301 and 302 may be included. Step 301 obtains the pattern or color information of the pattern area contiguous with the non-detection area; specifically, when the pattern area contains a pure color or few colors, its color information may be obtained directly, and if its color or pattern is complex, its pattern may be obtained instead. Step 302 then determines the first filling sample based on the pattern or color information obtained in step 301, such that the similarity between the pattern of the first filling sample and the pattern of the pattern area is lower than a set first threshold, or the difference between the color of the first filling sample and the principal-component color of the pattern area is greater than a set second threshold. Selecting the pattern or color of the first filling sample in this way makes it differ clearly from the pattern area, improving the coverage of the non-detection area by the first filling sample and, in particular, its shielding of reflections outside the vehicle body and of the body pattern within the non-detection area. This improves the accuracy of the first recognition result, reduces how often steps 104 and 105 must be executed, and further improves image recognition efficiency while preserving driving safety.
It should be noted that, specific values corresponding to the first threshold and the second threshold in step 302 are not specifically limited, and the sizes of the first threshold and the second threshold may be adjusted according to actual design requirements, so long as accuracy of image recognition can be guaranteed.
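The color branch of steps 301-302 can be sketched as a distance test against the principal color of the pattern area. In this sketch the function name, the candidate list, and the use of the per-channel median as the "principal component" of the area's color are all assumptions for illustration.

```python
import numpy as np

def pick_first_fill_color(region_pixels, candidates, second_threshold):
    """Choose a fill color far from the dominant color of the pattern area.

    region_pixels:    iterable of RGB triples sampled from the pattern area
    candidates:       RGB tuples considered as the first filling sample
    second_threshold: minimum Euclidean color distance (the "second threshold")
    Returns the first candidate whose distance exceeds the threshold, or None.
    """
    # Per-channel median as a robust stand-in for the principal color
    dominant = np.median(np.asarray(region_pixels, dtype=float), axis=0)
    for color in candidates:
        if np.linalg.norm(np.asarray(color, dtype=float) - dominant) > second_threshold:
            return color
    return None  # no candidate differs enough from the pattern area
```

For example, against a near-white pattern area, pure white would be rejected while pure blue clears a threshold of 100.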
It should be further added that, after the first recognition result is obtained in step 103, steps 104 and 105 may be executed directly, that is, the recognition result is output only after the external image to be detected has been recognized twice, further improving recognition accuracy. Of course, this is only an alternative embodiment provided by the present application, which is not limited thereto, and this example is not shown in the drawings.
Fig. 7 is another flowchart of the image recognition method provided in the embodiment of the present application; please refer to fig. 7 in conjunction with figs. 4 and 5. It should be noted that fig. 7 omits the judgment frame between step 103 and step 201 purely for layout reasons; as in figs. 4 and 5, step 201 outputs a first alarm prompt only when the first recognition result obtained in step 103 is that an obstacle exists in the non-detection area of the first filling image. Fig. 7 does not mean that step 201 is executed when the first recognition result is that no obstacle exists; as shown in fig. 4, in that case step 201 is not executed and the recognition result is output.

In some embodiments, before filling the non-detection region in the first filled image with the second filling sample, the method further comprises:
Step 401, obtaining pattern or color information of a first filling sample;
step 402, determining a second filling sample based on the first filling sample and the pattern or color information of the pattern area; the similarities between the pattern of the second filling sample and the pattern of the first filling sample, and between the pattern of the second filling sample and the pattern of the pattern area, are each smaller than a set third threshold, or the color of the second filling sample is complementary to the color of the first filling sample and differs from the principal-component color of the pattern area.
Specifically, the present application further provides an alternative embodiment in which, after the first recognition result indicates that an obstacle exists in the non-detection area of the first filling image, and before step 104 fills that area with the second filling sample, steps 401 and 402 may be included. Step 401 obtains the pattern or color information of the first filling sample; as before, when the first filling sample contains a pure color or few colors its color information may be obtained directly, and if its color or pattern is complex its pattern may be obtained instead. Step 402 then determines the second filling sample based on the pattern or color information of the first filling sample and of the pattern area obtained in steps 301 and 401, such that the similarity between the pattern of the second filling sample and the patterns of both the first filling sample and the pattern area is smaller than a set third threshold, or the color of the second filling sample is complementary to that of the first filling sample and differs from the principal-component color of the pattern area. Selecting the pattern or color of the second filling sample in this way makes it differ clearly from both the first filling sample and the pattern area, improving its coverage of the non-detection area and, in particular, its shielding of reflections outside the vehicle body and of the body pattern within the non-detection area, which further improves the accuracy of the second recognition result.
It should be noted that the specific value of the third threshold in step 402 is not specifically limited; its size may be adjusted according to actual design requirements, as long as the accuracy of image recognition can be guaranteed.
It should also be noted that the complementary colors here are complementary colors in the artistic sense: two colors that form an angle of 180° on the red-yellow-blue (RYB) hue circle, where RYB stands for Red, Yellow, Blue. Of course, making the color of the second filling sample complementary to that of the first filling sample is only an alternative embodiment provided by the present application; the application is not limited thereto, as long as the two colors differ clearly.
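The 180° relationship on the RYB hue circle can be illustrated with a discrete lookup. The 12-hue wheel below is a simplification of the continuous hue circle the text describes, and the hue names are assumptions for illustration.

```python
# The 12-hue artists' (RYB) wheel in order; the complement of a hue sits
# 180 degrees away, i.e. six positions along the wheel.
RYB_WHEEL = ["red", "red-orange", "orange", "yellow-orange",
             "yellow", "yellow-green", "green", "blue-green",
             "blue", "blue-violet", "violet", "red-violet"]

def ryb_complement(hue):
    """Return the RYB complement of a named hue on the 12-hue wheel."""
    i = RYB_WHEEL.index(hue)
    return RYB_WHEEL[(i + 6) % len(RYB_WHEEL)]
```

This reproduces the classic artists' pairings: red with green, yellow with violet, blue with orange; a second filling sample chosen this way is maximally distinct in hue from the first.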
Referring to fig. 3 and 5, in some embodiments, identifying the first filling image to obtain a first identification result includes:
identifying the first filling image, and outputting a first identification result that an obstacle exists in the non-detection area of the first filling image when the obstacle at least partially overlaps the non-detection area, or when the proportion of the non-detection area occupied by the overlap between the obstacle and the non-detection area is greater than a first preset value;
Identifying the second filling image to obtain a second identification result, including:
and identifying the second filling image, and outputting a second identification result that an obstacle exists in the non-detection area of the second filling image when the obstacle is identified as at least partially overlapping the non-detection area, or when the proportion of the non-detection area occupied by the overlap between the obstacle and the non-detection area is identified as greater than a second preset value.
Specifically, the present application further provides an alternative implementation in which the process of identifying the first filling image in step 103 to obtain the first identification result may be expressed as follows: the first filling image is recognized, and when the obstacle at least partially overlaps the non-detection area, or when the proportion of the non-detection area occupied by the overlap is greater than the first preset value, this indicates that an obstacle exists in the non-detection area and may even occupy a large part of it; a first recognition result that an obstacle exists in the non-detection area of the first filling image is then output. A first alarm prompt can subsequently be output through step 201 so that the driver knows the situation of the environment outside the vehicle and can decelerate or brake in time, improving driving safety. Alternatively, steps 104 and 105 may be executed to form the second filling image from the first filling image, further verifying whether an obstacle exists in the non-detection area of the image to be detected; this improves recognition accuracy so that more accurate information about the outside of the vehicle is fed back to the driver, improving driving safety.
The present application also provides an alternative implementation in which the process of identifying the second filling image in step 105 to obtain the second identification result may be expressed as follows: when the obstacle is identified as at least partially overlapping the non-detection area, or when the proportion of the non-detection area occupied by the overlap is identified as greater than the second preset value, this indicates that an obstacle exists in the non-detection area and may even occupy a large part of it. A second identification result that an obstacle exists in the non-detection area of the second filling image may then be output, and a second alarm prompt is further output through step 202 so that the driver pays attention to the environment outside the vehicle and can decelerate or brake in time, improving driving safety.
It should be noted that, the specific values of the first preset value and the second preset value are not specifically limited, and the sizes of the first preset value and the second preset value can be correspondingly adjusted according to actual design requirements; the values of the first preset value and the second preset value may be the same or different, and the application is not limited in particular.
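The overlap tests of steps 103 and 105 can be sketched with axis-aligned bounding boxes. The box representation and function names below are assumptions; the application does not specify how obstacles or areas are represented.

```python
def overlap_ratio(obstacle_box, region_box):
    """Proportion of the (axis-aligned) non-detection region covered by the
    obstacle's bounding box; boxes are (x1, y1, x2, y2) with x2 > x1, y2 > y1."""
    ox1, oy1, ox2, oy2 = obstacle_box
    rx1, ry1, rx2, ry2 = region_box
    w = max(0, min(ox2, rx2) - max(ox1, rx1))   # width of the intersection
    h = max(0, min(oy2, ry2) - max(oy1, ry1))   # height of the intersection
    region_area = (rx2 - rx1) * (ry2 - ry1)
    return (w * h) / region_area if region_area else 0.0

def obstacle_in_region(obstacle_box, region_box, preset=0.0):
    """Report an obstacle when the overlap exceeds the preset ratio
    (preset = 0 reduces to "at least partially overlapping")."""
    return overlap_ratio(obstacle_box, region_box) > preset
```

The first and second preset values of the text then simply correspond to different `preset` arguments for the first and second recognition passes.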
It should also be supplemented that the image collected by the image acquisition device provided by the application may include one or more alarm areas; when an obstacle such as a vehicle or a person is identified and falls partially or wholly within an alarm area, an alarm prompt is transmitted telling the driver to decelerate or even brake, avoiding harm. The image outside the alarm area is usually still needed for recognition: for a pedestrian of whom only the foot falls within the alarm area, the recognition rate would be much lower if only the foot image inside the alarm area were used. Image recognition may therefore first recognize the whole image and then judge whether the recognized obstacle falls wholly or partially within the alarm area, improving the recognition effect.
Fig. 8 is another flowchart of the image recognition method provided in the embodiment of the present application; please refer to fig. 8 in conjunction with figs. 4 and 5. It should be noted that fig. 8 omits the judgment frame between step 103 and step 201 purely for layout reasons; as in figs. 4 and 5, step 201 outputs a first alarm prompt only when the first recognition result obtained in step 103 is that an obstacle exists in the non-detection area of the first filling image. Fig. 8 does not mean that step 201 is executed when the first recognition result is that no obstacle exists; as shown in fig. 5, when the first recognition result obtained in step 103 is that no obstacle exists in the non-detection area of the first filling image, step 201 is not executed and step 104 is executed directly.

In some embodiments, the external image to be detected further includes a pattern area contiguous with the non-detection area, and the pattern area does not overlap the non-detection area;
The image recognition method further includes, before filling the non-detection region in the external image to be detected with the first filling sample:
step 501, acquiring a plurality of external images of a vehicle acquired by an image acquisition device, and extracting pattern areas in the plurality of external images;
step 502, extracting pattern samples and/or color samples based on the pattern areas of the plurality of external images, wherein a pattern sample is a pattern whose probability of occurrence in the pattern areas of the plurality of external images is greater than a preset probability, and a color sample is a color whose proportion of all the pattern areas across the plurality of external images is greater than a set fourth threshold;
in step 503, a first filling sample is determined based on the pattern sample or the color sample, the pattern of the first filling sample and the pattern of the pattern sample are different, and the color of the first filling sample and the color of the color sample are different.
Specifically, the present application further provides an alternative embodiment in which, before step 102 of filling the non-detection area in the external image to be detected with the first filling sample, steps 501-503 may be included. Step 501 acquires a plurality of external images of the same position area of the same vehicle acquired by the same image acquisition device, and extracts the pattern areas in those images. Step 502 then extracts a pattern sample and a color sample from the obtained pattern areas, the pattern sample being a pattern whose probability of occurrence in the pattern areas is greater than the preset probability, so that the picture most likely to appear in the pattern area of the vehicle is known, and the color sample being a color whose proportion of all the pattern areas is greater than the set fourth threshold. Step 503 may then determine the first filling sample based on the pattern sample and the color sample, with the pattern of the first filling sample differing from the pattern sample, or the color of the first filling sample differing from the color sample. This raises the probability that the first filling sample differs in pattern or color from the pattern area, further improving the accuracy of the first recognition result, reducing how often steps 104 and 105 must be executed, and helping further improve image recognition efficiency while preserving driving safety.
The specific values of the preset probability and the fourth threshold are not particularly limited, and the sizes of the preset probability and the fourth threshold can be correspondingly adjusted according to actual design requirements, so long as the accuracy of image recognition can be ensured.
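The color-sample branch of step 502 can be sketched as a frequency count over the pattern-area pixels of several images. In this sketch the function name is an assumption, and colors are counted exactly; a real implementation would likely quantize colors first.

```python
import numpy as np
from collections import Counter

def extract_color_sample(pattern_regions, fourth_threshold):
    """Return the color whose share of all pattern-area pixels across
    several external images exceeds the fourth threshold, else None.

    pattern_regions: list of N_i x 3 uint8 arrays, one per external image.
    """
    counts, total = Counter(), 0
    for region in pattern_regions:
        for px in np.asarray(region).reshape(-1, 3):
            counts[tuple(int(v) for v in px)] += 1
            total += 1
    color, n = counts.most_common(1)[0]   # most frequent color overall
    return color if n / total > fourth_threshold else None
```

The first filling sample of step 503 would then be chosen to differ from the returned color (for example, its RYB complement).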
The application provides an alternative implementation in which the pattern of the first filling sample may be black and white; alternatively, the color of the first filling sample may be selected to be pure blue.
It should be noted that, since the pattern areas of the plurality of external images of the vehicle may be dominated by patterns or by colors (solid colors), the application may selectively extract pattern samples and/or color samples from the pattern areas of the plurality of external images. For example, when the extracted content includes both pattern samples and color samples: if the pattern area is dominated by the pattern sample, the first filling sample may be adjusted according to the pattern sample of the pattern area; if the pattern area is dominated by the color sample, the first filling sample may be adjusted according to the color sample of the pattern area. Whether patterns or colors dominate may be judged by comparing their counts or proportions in the relevant images. In addition, if desired, the first filling sample may be determined on the basis of the pattern sample and the color sample together, that is, both the pattern and the color of the pattern area are considered when selecting the first filling sample.
Fig. 9 is a schematic diagram of an image recognition apparatus according to an embodiment of the present application; please refer to fig. 9 in conjunction with figs. 2-8. Based on the same inventive concept, the present application further provides an image recognition apparatus 200, the apparatus comprising:
an image acquisition module 91, configured to acquire an external image to be detected of the vehicle, where the external image to be detected includes at least a part of a body image of the vehicle;
an image filling module 92 for filling a non-detection region in the external image to be detected with a first filling sample to form a first filling image; the non-detection area is determined based on a target image, wherein the target image is an external image of the vehicle acquired by the same image acquisition equipment before the external image to be detected is acquired; the non-detection area comprises an image area where at least part of the vehicle body is located;
the image recognition and result output module 93 is configured to recognize the first filling image, and obtain a first recognition result, where the first recognition result includes: the non-detection area in the first filling image has an obstacle, or the non-detection area in the first filling image has no obstacle;
wherein the first fill pattern comprises a pattern or color.
Specifically, the image recognition device 200 provided by the present application may include at least an image acquisition module 91, an image filling module 92, and an image recognition and result output module 93. The image acquisition module 91 may be used at least to acquire an external image to be detected of a vehicle. The external image to be detected may be acquired by any image acquisition device and may be a picture directly captured by the image acquisition device or a frame of a video stream captured by the image acquisition device, which is not limited in this application. It should be noted that the image acquisition device for capturing the external image to be detected may be fixed on the external surface of the vehicle so that the captured image can include the external body of the vehicle; for example, the image acquisition device may be placed in a rearview mirror or mounted around the rearview mirror, but these are only two alternative embodiments provided by the present application, and the present application is not limited thereto, as long as the mounted image acquisition device can capture the external image to be detected of the vehicle. Further, the present application provides an alternative embodiment in which the external image to be detected obtained by the image acquisition module 91 includes at least part of a body image of the vehicle, for example at least part of the left-side body image or at least part of the right-side body image; of course, both may be included at the same time if required.
The image filling module 92 is used at least to further process the external image to be detected. The present application provides an alternative implementation in which a non-detection area in the external image to be detected, that is, at least a partial area outside the alarm area, is first defined, and the non-detection area is then filled with a first filling sample, so that the picture in the defined non-detection area is covered by the specific first filling sample, thereby forming a first filling image. The non-detection area in the external image to be detected is determined based on a target image, where the target image is an external image of the vehicle acquired in the same direction by the same image acquisition device before the external image to be detected was acquired, and the non-detection area includes the image area where at least part of the vehicle body is located. Because the non-detection area includes the image area where at least part of the vehicle body is located, after the image filling module 92 fills the non-detection area with the first filling sample, at least part of the vehicle body in the external image to be detected is covered by the first filling sample; that is, reflections of people, cars, trees and the like on the vehicle body, as well as patterns carried by the vehicle body itself, are at least partially covered by the first filling sample. This prevents such reflections and body patterns from degrading the image recognition accuracy, improves the recognition effect, avoids false recognition information affecting the driving safety of the user, and improves the user experience.
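The filling operation described above can be sketched in a few lines. This is a minimal illustration, assuming the non-detection area is available as a binary mask and using a solid colour as the first filling sample (one of the two options the text names); the function name and array shapes are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def fill_non_detection(image: np.ndarray, mask: np.ndarray,
                       fill_color=(128, 128, 128)) -> np.ndarray:
    """Return a copy of `image` whose masked pixels are replaced by fill_color.

    image: H x W x 3 uint8 array (the external image to be detected)
    mask:  H x W boolean array, True inside the non-detection area
    """
    filled = image.copy()
    filled[mask] = fill_color  # cover body reflections / body patterns
    return filled
```

Covering the masked pixels with a constant value is what makes the non-detection area a known, fixed input for the subsequent recognition step.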
When the non-detection area is defined to include all body parts in the external image to be detected, the first filling sample can completely cover the reflections of people, cars, trees and the like on the vehicle body in the external image to be detected, as well as the patterns carried by the vehicle body, which is even more conducive to improving the image recognition accuracy. It should further be added that the present application does not specifically limit the style, color, etc. of the first filling sample: the first filling sample includes a pattern or a color, and specific patterns, lines, colors, etc. can be selected as required.
The image recognition and result output module 93 is configured to at least recognize a first filled image in which the non-detection area is filled with the first filling sample, so as to obtain a first recognition result, where the first recognition result includes at least two types, the first type is that the non-detection area in the first filled image has an obstacle, and the second type is that the non-detection area in the first filled image has no obstacle.
In the image recognition device 200 provided by the application, the image acquisition module 91 acquires the external image to be detected of the vehicle; the image filling module 92 defines the non-detection area in the external image to be detected to include at least part of the vehicle body and then fills the non-detection area with the first filling sample to form the first filling image, so that at least part of the vehicle body in the first filling image is replaced by the first filling sample. As a result, the non-detection area seen by the image recognition and result output module 93 during recognition is a specific pattern or color, which avoids the influence of reflective images and body patterns present on at least part of the vehicle body on the recognition accuracy, and thus improves the accuracy of image recognition.
In an embodiment, when the first recognition result output by the image recognition and result output module 93 is that the non-detection area in the first filling image has an obstacle, the method further includes: the image filling module 92 may further fill the non-detection area in the first filling image with a second filling sample to form a second filling image; the image recognition and result output module 93 may further recognize the second filling image to obtain a second recognition result, where the second recognition result includes: the non-detection area in the second filling image has an obstacle, or the non-detection area in the second filling image has no obstacle; wherein the second filling sample is different from the first filling sample, and the second filling sample includes a pattern or a color. Reference is made in particular to fig. 3 and the description above relating to fig. 3.
In one embodiment, at least one of the following is also included: the image recognition and result output module 93 may output a first alert prompt before filling the non-detection region in the first filled image with the second filling sample; the image recognition and result output module 93 may output a second alarm prompt if the second recognition result is that the non-detection area in the second filling image has an obstacle; the image recognition and result output module 93 may output a false alarm prompt if the second recognition result is that the non-detection area in the second filling image has no obstacle. Reference is made in particular to fig. 4, 5, and the description above with respect to fig. 4, 5.
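The two-stage flow above — re-checking with a differently filled image only after a first alarm, and classifying a clean re-check as a false alarm — can be sketched as a small control-flow helper; `detect_first` and `detect_second` are assumed callables standing in for recognition on the first and second filled images:

```python
def two_stage_check(detect_first, detect_second):
    """Return 'no obstacle', 'obstacle', or 'false alarm' following the
    two-stage scheme in the text. detect_* are assumed callables that run
    recognition on the first / second filling image and return a bool."""
    if not detect_first():
        return "no obstacle"   # first check already clean, no refill needed
    # a first alarm prompt would be output here, then the non-detection
    # area is refilled with the second filling sample and rechecked
    if detect_second():
        return "obstacle"      # second alarm prompt
    return "false alarm"       # false alarm prompt
```

The point of the second stage is that an "obstacle" coinciding with the first filling sample's pattern disappears once the fill changes, which is exactly the false-alarm branch.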
In an embodiment, the external image to be detected further includes a pattern area connected to the non-detection area, and the pattern area does not overlap with the non-detection area; before the image filling module 92 fills the non-detection region in the external image to be detected with the first filling sample, it further includes: acquiring pattern or color information of the pattern area through an image acquisition module 91; determining a first fill pattern based on pattern or color information of the pattern area; the approximation degree of the pattern of the first filling sample and the pattern of the pattern area is lower than a set first threshold value, or the difference between the color of the first filling sample and the color of the main component of the color of the pattern area is larger than a set second threshold value. Reference is made specifically to fig. 6 in conjunction with fig. 4 and 5, and the description above with respect to fig. 5 and 6.
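A minimal sketch of choosing the first filling sample by colour, assuming the "main component" of the pattern area's colour is approximated by its mean and the colour difference is measured as Euclidean distance in RGB — both are illustrative choices, since the patent leaves the metric and the value of the second threshold open:

```python
import numpy as np

def dominant_color(region: np.ndarray) -> np.ndarray:
    """Mean colour of the region, a simple stand-in for the 'main
    component' of the pattern area's colour named in the text."""
    return region.reshape(-1, 3).mean(axis=0)

def pick_first_fill(region: np.ndarray, candidates, second_threshold=60.0):
    """Return the first candidate colour whose distance to the dominant
    colour exceeds the set second threshold, or None if none qualifies."""
    main = dominant_color(region)
    for c in candidates:
        if np.linalg.norm(np.asarray(c, dtype=float) - main) > second_threshold:
            return c
    return None
```

Picking a fill far from the pattern area's dominant colour keeps the filled region visually distinct from the adjacent picture, which is the stated goal of the threshold.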
In one embodiment, before the image filling module 92 fills the non-detection region in the first filled image with the second filling sample, it further comprises: acquiring pattern or color information of the first filling sample through an image acquisition module 91; determining a second filling sample based on the first filling sample and the pattern or color information of the pattern area; the similarity of the patterns of the second filling sample and the patterns of the first filling sample and the patterns of the pattern areas are smaller than a set third threshold value, or the colors of the second filling sample and the first filling sample are complementary colors and are different from the colors of the main components of the colors of the pattern areas. Reference is made specifically to fig. 7 in conjunction with fig. 4 and 5, and the description above with respect to fig. 5 and 7.
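For the colour branch of the second filling sample, one simple reading of "complementary colours" is the 8-bit RGB complement; this is an assumption made for illustration, since the text does not fix a colour model:

```python
def complementary(rgb):
    """255-complement of an 8-bit RGB colour -- one simple interpretation
    of the 'complementary colours' relation between the second and first
    filling samples described in the text."""
    return tuple(255 - c for c in rgb)
```

Taking the complement guarantees the second fill differs maximally from the first per channel; the separate requirement that it also differ from the pattern area's main colour would still need to be checked.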
In one embodiment, when the image recognition and result output module 93 recognizes the first filling image to obtain the first recognition result, the steps specifically performed may include: recognizing the first filling image, and outputting the first recognition result that an obstacle exists in the non-detection area in the first filling image when the obstacle is recognized to at least partially overlap the non-detection area, or when the ratio of the overlapping area of the obstacle and the non-detection area to the non-detection area is recognized to be larger than a first preset value. When the image recognition and result output module 93 recognizes the second filling image to obtain the second recognition result, the steps specifically performed may include: recognizing the second filling image, and outputting the second recognition result that an obstacle exists in the non-detection area in the second filling image when the obstacle is recognized to at least partially overlap the non-detection area, or when the ratio of the overlapping area of the obstacle and the non-detection area to the non-detection area is recognized to be larger than a second preset value.
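The decision rule just described can be sketched as follows, assuming boolean masks for the obstacle and the non-detection area; the function name and the convention that a non-positive preset value means "any overlap counts" are illustrative assumptions:

```python
import numpy as np

def obstacle_reported(obstacle_mask: np.ndarray, region_mask: np.ndarray,
                      preset_ratio: float = 0.0) -> bool:
    """Report an obstacle when it at least partially overlaps the
    non-detection area, or when the overlap's share of the non-detection
    area exceeds the preset value, mirroring the two conditions in the
    text. Masks are H x W boolean arrays."""
    overlap = int(np.logical_and(obstacle_mask, region_mask).sum())
    if preset_ratio <= 0:
        return overlap > 0            # "at least partially overlapped"
    region = int(region_mask.sum())
    return region > 0 and overlap / region > preset_ratio
```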
In an embodiment, when the external image to be detected further includes a pattern area connected with the non-detection area, and the pattern area does not overlap with the non-detection area; before the image filling module 92 fills the non-detection area in the external image to be detected with the first filling sample, the following steps are further performed by the image acquisition module 91: acquiring a plurality of external images of the vehicle acquired by the image acquisition equipment, and extracting pattern areas in the plurality of external images; extracting pattern samples and/or color samples based on pattern areas of the plurality of external images, wherein the pattern samples are patterns with occurrence probability larger than preset probability in the pattern areas of the plurality of external images, and the color samples are colors with the duty ratio larger than a set fourth threshold value in the pattern areas of the plurality of external images compared with all the pattern areas; the first filling sample is determined based on the pattern sample or the color sample, the pattern of the first filling sample is different from the pattern of the pattern sample, and the color of the first filling sample is different from the color of the color sample. Reference is made specifically to fig. 8 in conjunction with fig. 4 and 5, and the description above with respect to fig. 5 and 8.
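The colour-sample criterion (a colour whose share of all pattern-area pixels exceeds the set fourth threshold) can be sketched over a flat pixel representation; the data layout used here is an assumption made for illustration:

```python
from collections import Counter

def color_samples(pattern_areas, fourth_threshold=0.3):
    """Return the colours whose proportion across all pattern-area pixels
    exceeds the set fourth threshold, per the criterion in the text.

    pattern_areas: list of lists of (r, g, b) pixel tuples, one inner list
    per extracted pattern area (an illustrative flat representation).
    """
    counts = Counter(px for area in pattern_areas for px in area)
    total = sum(counts.values())
    return {c for c, n in counts.items() if n / total > fourth_threshold}
```

The first filling sample would then be chosen to differ from every returned colour (and, on the pattern branch, from the frequent patterns).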
Fig. 10 is an internal structure diagram of a computer device according to an embodiment of the present application, and based on the same inventive concept, the present application further provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the foregoing image recognition method when executing the computer program, where the image recognition method is any one of the image recognition methods mentioned in the embodiments of the present application, and related embodiments may be referred to above.
In one embodiment, a computer device is provided, which may be a server or a terminal device, and whose internal structure may be as shown in fig. 10. The computer device includes a processor, a memory, an input/output interface (I/O), and a communication interface. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing the acquired external images to be detected. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements the image recognition method described above.
It will be appreciated by those skilled in the art that the structure shown in FIG. 10 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
Based on the same inventive concept, the present application also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the aforementioned image recognition method, which is any one of the image recognition methods mentioned in the embodiments of the present application, and related embodiments can be seen from the foregoing.
Based on the same inventive concept, the present application also provides a computer program product, comprising a computer program, which when executed by a processor, implements the aforementioned image recognition method, where the image recognition method is any one of the image recognition methods mentioned in the embodiments of the present application, and related embodiments may be referred to above.
It should be noted that, the data (including, but not limited to, data for analysis, stored data, displayed data, etc.) related to the present application are all information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region.
Those skilled in the art will appreciate that implementing all or part of the methods described above may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the steps of the method embodiments described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing embodiments illustrate only a few implementations of the application; although they are described in detail, they are not to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the protection scope of the application. Accordingly, the scope of protection of the application shall be determined by the appended claims.

Claims (10)

1. An image recognition method, the method comprising:
acquiring an external image to be detected of a vehicle, wherein the external image to be detected comprises at least part of a vehicle body image of the vehicle;
filling a non-detection area in the external image to be detected by using a first filling sample to form a first filling image; the non-detection area is determined based on a target image, wherein the target image is an external image of the vehicle acquired by the same image acquisition equipment before the external image to be detected is acquired; the non-detection area comprises an image area where the at least part of the vehicle body is located;
identifying the first filling image to obtain a first identification result, wherein the first identification result comprises: an obstacle exists in the non-detection area in the first filled image, or the non-detection area in the first filled image is free of an obstacle;
wherein the first fill pattern comprises a pattern or a color.
2. The method for recognizing an image according to claim 1, wherein,
the image recognition method further includes, when the first recognition result is that the non-detection area in the first filling image has the obstacle, the steps of:
filling the non-detection area in the first filling image with a second filling sample to form a second filling image;
identifying the second filling image to obtain a second identification result, wherein the second identification result comprises: an obstacle exists in the non-detection area in the second filled image, or the non-detection area in the second filled image is free of an obstacle;
wherein the second fill pattern is different from the first fill pattern, the second fill pattern comprising a pattern or a color.
3. The image recognition method of claim 2, further comprising at least one of:
outputting a first alert prompt prior to said filling of said non-detection region in said first fill image with a second fill sample;
outputting a second alarm prompt when the second recognition result is that the non-detection area in the second filling image has the obstacle;
and outputting a false alarm prompt when the second recognition result is that the non-detection area in the second filling image does not have the obstacle.
4. The method for recognizing an image according to claim 2, wherein,
the external image to be detected also comprises a pattern area connected with the non-detection area, and the pattern area is not overlapped with the non-detection area;
before the filling of the non-detection area in the external image to be detected with the first filling sample, the method further comprises:
acquiring pattern or color information of the pattern area;
determining the first fill pattern based on pattern or color information of the pattern area; the approximation degree of the pattern of the first filling sample and the pattern of the pattern area is lower than a set first threshold value, or the difference between the color of the first filling sample and the color of the main component of the color of the pattern area is larger than a set second threshold value.
5. The method for recognizing an image according to claim 4, wherein,
before the filling the non-detection region in the first filling image with the second filling sample, the method further comprises:
acquiring pattern or color information of the first filling sample;
determining the second filling sample based on the first filling sample and the pattern or color information of the pattern area; the approximations of the patterns of the second filling sample, the patterns of the first filling sample and the patterns of the pattern area are smaller than a set third threshold value, or the colors of the second filling sample and the first filling sample are complementary colors and are different from the colors of the main components of the colors of the pattern area.
6. The method for recognizing an image according to claim 2, wherein,
the step of identifying the first filling image to obtain a first identification result comprises the following steps:
identifying the first filling image, and outputting the first identification result of the obstacle existing in the non-detection area in the first filling image under the condition that the obstacle is at least partially overlapped with the non-detection area or under the condition that the overlapping area of the obstacle and the non-detection area is identified to be larger than a first preset value in the non-detection area;
The step of identifying the second filling image to obtain a second identification result comprises the following steps:
and identifying the second filling image, and outputting the second identification result of the obstacle existing in the non-detection area in the second filling image under the condition that the obstacle is identified to be at least partially overlapped with the non-detection area, or under the condition that the ratio of the overlapping area of the obstacle and the non-detection area in the non-detection area is identified to be larger than a second preset value.
7. The method for recognizing an image according to claim 2, wherein,
the external image to be detected also comprises a pattern area connected with the non-detection area, and the pattern area is not overlapped with the non-detection area;
before the filling of the non-detection area in the external image to be detected with the first filling sample, the image recognition method further includes:
acquiring a plurality of external images of the vehicle acquired by the image acquisition device, and extracting the pattern areas in the external images;
extracting a pattern sample and/or a color sample based on the pattern areas of the external images, wherein the pattern sample is a pattern with occurrence probability larger than a preset probability in the pattern areas of the external images, and the color sample is a color with a duty ratio larger than a set fourth threshold value in the pattern areas of the external images compared with all the pattern areas;
determining the first filling sample based on the pattern sample or the color sample, wherein the pattern of the first filling sample is different from the pattern of the pattern sample, and the color of the first filling sample is different from the color of the color sample.
8. An image recognition apparatus, the apparatus comprising:
the image acquisition module is used for acquiring an external image to be detected of a vehicle, wherein the external image to be detected comprises at least part of a vehicle body image of the vehicle;
the image filling module is used for filling the non-detection area in the external image to be detected by using a first filling sample to form a first filling image; the non-detection area is determined based on a target image, wherein the target image is an external image of the vehicle acquired by the same image acquisition equipment before the external image to be detected is acquired; the non-detection area comprises an image area where the at least part of the vehicle body is located;
the image recognition and result output module is used for recognizing the first filling image to obtain a first recognition result, and the first recognition result comprises: an obstacle exists in the non-detection area in the first filled image, or the non-detection area in the first filled image is free of an obstacle;
wherein the first fill pattern comprises a pattern or a color.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202311029663.1A 2023-08-16 2023-08-16 Image recognition method, device, computer equipment and storage medium Active CN117011830B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311029663.1A CN117011830B (en) 2023-08-16 2023-08-16 Image recognition method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311029663.1A CN117011830B (en) 2023-08-16 2023-08-16 Image recognition method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117011830A true CN117011830A (en) 2023-11-07
CN117011830B CN117011830B (en) 2024-04-26

Family

ID=88563429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311029663.1A Active CN117011830B (en) 2023-08-16 2023-08-16 Image recognition method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117011830B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117740186A (en) * 2024-02-21 2024-03-22 微牌科技(浙江)有限公司 Tunnel equipment temperature detection method and device and computer equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016013887A (en) * 2014-07-01 2016-01-28 株式会社タダノ Obstacle notification system of crane vehicle
JP2017074870A (en) * 2015-10-15 2017-04-20 日立建機株式会社 Device for detecting obstacle around vehicle
CN106908783A (en) * 2017-02-23 2017-06-30 苏州大学 Obstacle detection method based on multi-sensor information fusion
JP2018101898A (en) * 2016-12-20 2018-06-28 株式会社デンソーテン Image processing apparatus and image processing method
WO2019177036A1 (en) * 2018-03-15 2019-09-19 株式会社小糸製作所 Vehicle imaging system
US20200285904A1 (en) * 2019-03-08 2020-09-10 Milan Gavrilovic Method for creating a collision detection training set including ego part exclusion
JP2023071487A (en) * 2021-11-11 2023-05-23 フォルシアクラリオン・エレクトロニクス株式会社 Display control device and display control method
CN116252712A (en) * 2021-12-01 2023-06-13 现代自动车株式会社 Driver assistance apparatus, vehicle, and method of controlling vehicle
CN116265277A (en) * 2021-12-10 2023-06-20 广州汽车集团股份有限公司 Vehicle starting control method, device, equipment and storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016013887A (en) * 2014-07-01 2016-01-28 株式会社タダノ Obstacle notification system of crane vehicle
JP2017074870A (en) * 2015-10-15 2017-04-20 日立建機株式会社 Device for detecting obstacle around vehicle
JP2018101898A (en) * 2016-12-20 2018-06-28 株式会社デンソーテン Image processing apparatus and image processing method
CN106908783A (en) * 2017-02-23 2017-06-30 苏州大学 Obstacle detection method based on multi-sensor information fusion
WO2019177036A1 (en) * 2018-03-15 2019-09-19 株式会社小糸製作所 Vehicle imaging system
US20200285904A1 (en) * 2019-03-08 2020-09-10 Milan Gavrilovic Method for creating a collision detection training set including ego part exclusion
CN113544021A (en) * 2019-03-08 2021-10-22 奥拉科产品有限责任公司 Method for creating a collision detection training set comprising exclusion from components
JP2023071487A (en) * 2021-11-11 2023-05-23 フォルシアクラリオン・エレクトロニクス株式会社 Display control device and display control method
CN116252712A (en) * 2021-12-01 2023-06-13 现代自动车株式会社 Driver assistance apparatus, vehicle, and method of controlling vehicle
CN116265277A (en) * 2021-12-10 2023-06-20 广州汽车集团股份有限公司 Vehicle starting control method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHAO X ET AL.: "Omni-Directional Obstacle Detection for Vehicles Based on Depth Camera", IEEE Access, 31 December 2020 (2020-12-31), pages 93733 - 93748 *
LUO Jinling: "Design of a Forward Vehicle Detection and Ranging System Based on Computer Vision", Computer Programming Skills & Maintenance, no. 22, 18 November 2017 (2017-11-18), pages 89 - 90 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117740186A (en) * 2024-02-21 2024-03-22 微牌科技(浙江)有限公司 Tunnel equipment temperature detection method and device and computer equipment
CN117740186B (en) * 2024-02-21 2024-05-10 微牌科技(浙江)有限公司 Tunnel equipment temperature detection method and device and computer equipment

Also Published As

Publication number Publication date
CN117011830B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
US10853673B2 (en) Brake light detection
US11840239B2 (en) Multiple exposure event determination
CN106647776B (en) Method and device for judging lane changing trend of vehicle and computer storage medium
US9052393B2 (en) Object recognition system having radar and camera input
CN111815959B (en) Vehicle violation detection method and device and computer readable storage medium
CN110781768A (en) Target object detection method and device, electronic device and medium
US20130077830A1 (en) Traffic sign detecting method and traffic sign detecting device
KR20210078530A (en) Lane property detection method, device, electronic device and readable storage medium
US20140205139A1 (en) Object recognition system implementing image data transformation
CN117011830B (en) Image recognition method, device, computer equipment and storage medium
US20190377082A1 (en) System and method for detecting a vehicle in night time
US20210117700A1 (en) Lane line attribute detection
CN115457358A (en) Image and point cloud fusion processing method and device and unmanned vehicle
CN114170585B (en) Dangerous driving behavior recognition method and device, electronic equipment and storage medium
CN110909674A (en) Traffic sign identification method, device, equipment and storage medium
CN111332306A (en) Traffic road perception auxiliary driving early warning device based on machine vision
CN111191482A (en) Brake lamp identification method and device and electronic equipment
CN113505860B (en) Screening method and device for blind area detection training set, server and storage medium
CN115880632A (en) Timeout stay detection method, monitoring device, computer-readable storage medium, and chip
CN115457486A (en) Two-stage-based truck detection method, electronic equipment and storage medium
CN112686136A (en) Object detection method, device and system
CN111332305A (en) Active early warning type traffic road perception auxiliary driving early warning system
US20230394843A1 (en) Method for identifying moving vehicles and in-vehicle device
US20230045706A1 (en) System for displaying attention to nearby vehicles and method for providing an alarm using the same
CN117636302A (en) Parking space judgment implementation method, system, vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant