CN117496483B - Night image recognition method and system - Google Patents


Info

Publication number
CN117496483B
CN117496483B (application CN202311522896.5A)
Authority
CN
China
Prior art keywords
image
images
historical
night
duty ratio
Prior art date
Legal status
Active
Application number
CN202311522896.5A
Other languages
Chinese (zh)
Other versions
CN117496483A (en)
Inventor
丁赞
黄亮
黄经
Current Assignee
Shenzhen Senyun Intelligent Technology Co ltd
Original Assignee
Shenzhen Senyun Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Senyun Intelligent Technology Co ltd filed Critical Shenzhen Senyun Intelligent Technology Co ltd
Priority to CN202311522896.5A
Publication of CN117496483A
Application granted
Publication of CN117496483B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/36 Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/62 Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Nonlinear Science (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

An embodiment of the present application provides a night image recognition method and system. In the method, a server receives a first image and information of the first image, the information including an acquisition time and acquisition coordinates. When the server identifies the first image and determines that it does not contain a vehicle image, and determines that the acquisition time falls within a night period, the server extracts from the historical images a plurality of images with the same acquisition coordinates and selects from them a plurality of historical images whose hour and minute match the acquisition time. The server then determines the overlapping area between a first difference region and a second difference region, computes a first ratio of the overlapping area to the first difference region and a second ratio of the overlapping area to the second difference region, averages the two ratios, and, if the average ratio is greater than a first threshold, determines that the first image contains a temporary obstacle.

Description

Night image recognition method and system
Technical Field
The present application belongs to the technical field of image processing, and in particular relates to a night image recognition method and system.
Background
Night image recognition is important in many fields, such as vehicle navigation, autonomous driving, and AGVs. However, because lighting at night is poor, existing night image recognition methods cannot accurately identify objects, particularly obstacles.
Disclosure of Invention
The present application provides a night image recognition method and system that can accurately analyze night images, in particular identify obstacles, and thereby improve night recognition accuracy.
In a first aspect, the present application provides a night image recognition method, the method comprising the steps of:
the server receives a first image and information of the first image, where the information of the first image includes: an acquisition time and acquisition coordinates;
when the server identifies the first image and determines that the first image does not contain a vehicle image, and determines that the acquisition time falls within a night period, the server extracts from the historical images a plurality of images with the same acquisition coordinates, selects from them a plurality of historical images whose hour and minute match the acquisition time, and filters out the historical images that contain a vehicle image to obtain n historical images;
the server extracts two historical images from the n historical images, compares the first of the two with the first image to determine a first difference region in the first image that differs from the first historical image, compares the second of the two with the first image to determine a second difference region that differs from the second historical image, determines the overlapping area between the first and second difference regions, computes a first ratio of the overlapping area to the first difference region and a second ratio of the overlapping area to the second difference region, averages the two ratios to obtain an average ratio, and determines that the first image contains a temporary obstacle if the average ratio is greater than a first threshold.
In a second aspect, there is provided a night image recognition system, the system comprising:
a communication unit configured to receive a first image and information of the first image, where the information of the first image includes: an acquisition time and acquisition coordinates;
a processing unit configured to identify the first image and, when it is determined that the first image does not contain a vehicle image and that the acquisition time falls within a night period, extract from the historical images a plurality of images with the same acquisition coordinates, select from them a plurality of historical images whose hour and minute match the acquisition time, and filter out the historical images that contain a vehicle image to obtain n historical images;
a comparison and determination unit configured to extract two historical images from the n historical images, compare the first of the two with the first image to determine a first difference region in the first image that differs from the first historical image, compare the second of the two with the first image to determine a second difference region that differs from the second historical image, determine the overlapping area between the first and second difference regions, compute a first ratio of the overlapping area to the first difference region and a second ratio of the overlapping area to the second difference region, average the two ratios, and determine that the first image contains a temporary obstacle if the average ratio is greater than a first threshold.
In a third aspect, the present application provides a computer storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform part or all of the steps described in the first aspect of the application.
The embodiment of the application has the following beneficial effects:
In the technical solution provided by the present application, the server receives the first image and its information from the vehicle-mounted terminal, the information including an acquisition time and acquisition coordinates. When the server identifies the first image and determines that it does not contain a vehicle image, and that the acquisition time falls within a night period, the server extracts from the historical images a plurality of images with the same acquisition coordinates, selects from them the historical images whose hour and minute match the acquisition time, and filters out those containing a vehicle image to obtain n historical images. The server then extracts any two of the n historical images, compares each with the first image to determine a first difference region and a second difference region, determines the overlapping area between the two difference regions, computes the ratio of the overlapping area to each difference region, averages the two ratios, and, if the average ratio is greater than a first threshold, determines that the first image contains a temporary obstacle and sends a reminder message to the vehicle-mounted terminal. This enables the identification of temporary obstacles, whose position in an image is typically random: images captured at the same coordinates at the same time of day should be substantially identical once the influence of vehicles is removed. If the first image differs from two independently selected historical images captured under similar conditions, and the two difference regions largely coincide, i.e., their overlapping portion exceeds a certain ratio, the acquired first image can be judged to contain a temporary obstacle. An object on the road that is small in volume yields a ratio below that value; once the ratio exceeds it, the object is large enough to count as an obstacle, and whatever the obstacle is, it has a certain effect on driving.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a system architecture of a server according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of a night image recognition method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a difference matrix according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a night image recognition system according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, system, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The following description will first be made of the relevant terms that the present application relates to.
Infrared image: an image formed when an infrared remote sensor receives the infrared rays reflected or emitted by objects on the ground.
Visible light: light that can generally be resolved by the human eye. A visible light image is an image recognizable to the human eye.
Image shadow: when light from a source cannot reach the surface of an object because other objects block it, the object is in shadow; shadows make a scene look much more realistic and allow a viewer to perceive the spatial relationships between objects.
Thermal imaging: a technique in which an infrared detector and an optical imaging objective receive the infrared radiation energy distribution pattern of a measured object and project it onto the photosensitive element of the infrared detector, thereby obtaining an infrared thermal image.
Illuminance: illumination intensity, measured in lux; a quantity used to indicate the strength of illumination and the degree to which an object's surface is illuminated.
The following describes the system architecture of the server according to the embodiments of the present application.
As shown in fig. 1, the present application further provides a server 10 and a plurality of vehicle-mounted terminals 20, where the server 10 is connected to the vehicle-mounted terminals 20 through a wireless network. Each vehicle-mounted terminal includes at least one processor 11 and a memory 12, and may further include a communication interface 14, a plurality of cameras 15, and a bus 13. The processor 11, the memory 12, the cameras 15, and the communication interface 14 communicate with one another through the bus 13. The communication interface 14 may transmit information and may have a wireless communication function, either short-range or long-range (for example, an LTE or NR system). The processor 11 may invoke logic instructions in the memory 12 to perform or support the method in the embodiments of the present application.
Further, the logic instructions in the memory 12 described above may be implemented in the form of software functional units and stored in a computer readable storage medium when sold or used as a stand alone product.
The memory 12, as a computer readable storage medium, may be configured to store a software program, a computer executable program, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 11 performs functional applications as well as data processing, i.e. implements or supports the methods of embodiments of the present application, by running software programs, instructions or modules stored in the memory 12.
Each vehicle-mounted terminal is provided with a plurality of cameras 15; depending on the manufacturer, the cameras 15 may be mounted at different positions on the vehicle, and the cameras installed on each terminal may differ. In addition, because the vehicle-mounted terminals all move rapidly, they can be deployed or configured temporarily according to the application scenario in which night images are to be recognized; the present application does not limit the positional or communication relationships among the terminals.
The central server 10 may be a single server device or, in an optional application scenario, a server cluster, such as a distributed server or a cloud server; the present application does not limit the connection relationships or the division of functions among the servers.
The following describes the main technical application scenarios of the embodiment of the present application:
Night image recognition is significant for image recognition generally and has many applications in the aviation and vehicle-mounted fields. Because light at night is weak, conventional image recognition mainly targets visible-light scenes (that is, good daytime illumination); when applied at night, such visible-light methods have a high error rate and are therefore unsuitable. The main application scenario of night recognition is the vehicle-mounted field, and specifically the recognition of temporary road obstacles. An obstacle lying on the road has a very large influence on driving safety, especially on highways and expressways (roads with a speed limit generally above 80 km/h). Temporary obstacles often arise when a truck or other vehicle loses an object (such as cargo or garbage) while driving. During the day, good illumination and visibility mean such an obstacle has only a small influence on vehicle safety. At night, however, illumination is limited: roads are generally lit by street lamps, but their brightness is far worse than daylight, so a driver may only notice a temporary obstacle when already very close to it. Because vehicles travel fast at night (often above 90 km/h on highways), the time available to avoid a temporary obstacle is very short, and a fast-moving vehicle may collide with it, creating a safety hazard or even an accident. Recognizing temporary obstacles is therefore particularly important at night.
The night image recognition according to the embodiment of the application mainly recognizes temporary obstacles, and is not applicable to fixed obstacles such as roadblocks, guardrails and the like.
The specific method is described in detail below.
Referring to fig. 2, the present application further provides a night image recognition method, and fig. 2 is a flow chart of the night image recognition method provided by the present application, where the method may be executed under the system architecture of the server shown in fig. 1, specifically, the method shown in fig. 2 may be executed by a central server and a vehicle-mounted terminal under the system architecture of the server, and the method shown in fig. 2 includes the following steps:
Step S201: the server receives a first image and information of the first image, where the information of the first image includes: an acquisition time and acquisition coordinates.
For example, the acquisition time may comprise four fields: month, day, hour, and minute; in the alternative, a fifth field, the second, may also be included. The coordinates may be GPS coordinates or BeiDou coordinates.
The first image may be an image acquired by a visible-light camera of the vehicle-mounted terminal; in an optional application scenario, it may instead be acquired by an infrared camera of the vehicle-mounted terminal. In the embodiment of the application, the first image may be collected by a camera of the vehicle-mounted terminal disposed on the interior rear-view mirror of the vehicle, which is generally mounted at the top of the front windshield on the windshield's center line.
Step S202: when the server identifies the first image and determines that the first image does not contain a vehicle image, and determines that the acquisition time falls within a night period, the server extracts from the historical images a plurality of images with the same acquisition coordinates, selects from them a plurality of historical images whose hour and minute match the acquisition time, and filters out the historical images that contain a vehicle image to obtain n historical images.
For example, the historical images may be images collected by cameras disposed on the interior rear-view mirror of a vehicle, which avoids the angle differences that arise when cameras are mounted at different positions on the vehicle and would otherwise reduce subsequent recognition accuracy.
For example, the server identifying the first image to determine that it does not include a vehicle image may specifically include:
the server performing classification on the first image to determine whether the first image includes a vehicle image, using methods including but not limited to support vector machines, recurrent neural networks, and LSTM networks; the present application is not limited to a particular vehicle image recognition method.
For example, determining that the acquisition time belongs to the night period may specifically include:
determining the time zone corresponding to the acquisition coordinates; determining, from a mapping between time zones and night periods, a first night period corresponding to the current coordinates; and, if the first time consisting of the hour and minute of the acquisition time falls within the first night period, determining that the acquisition time belongs to the night period.
Night periods differ greatly across time zones (for example, between eastern and western regions) and are tied to the acquisition coordinates, so a different night-period mapping must be set for each time zone. For example, in an eastern province A, 18:00 may already belong to the night period, while in a western province B even 20:00 may still fall outside it. Distinguishing the night period mainly facilitates the subsequent recognition of obstacles, because during the day sunlight and visibility are good and a driver can see a temporary obstacle clearly with his or her own eyes. The time zones here are the ordinary time zones; for example, province A belongs to UTC+8.
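As a rough illustration of this time-zone check, the following sketch tests whether the hour and minute of an acquisition time fall within the night period mapped to a time zone. The `NIGHT_PERIODS` mapping and its period values are hypothetical assumptions, not the application's actual mapping:

```python
from datetime import time

# Hypothetical mapping from time zone (UTC offset, hours) to a night period
# [start, end); a period that crosses midnight is handled by the wrap branch.
NIGHT_PERIODS = {
    8: (time(18, 30), time(6, 0)),   # e.g. an eastern province (UTC+8)
    7: (time(19, 30), time(6, 30)),  # e.g. a western province (assumed values)
}

def in_night_period(acq_time: time, utc_offset: int) -> bool:
    """True if the hour/minute of the acquisition time falls within the
    night period mapped to the time zone of the acquisition coordinates."""
    start, end = NIGHT_PERIODS[utc_offset]
    if start <= end:
        return start <= acq_time < end
    # Night period wraps past midnight: after start OR before end.
    return acq_time >= start or acq_time < end

print(in_night_period(time(20, 0), 8))  # True: night in the eastern zone
print(in_night_period(time(12, 0), 8))  # False: midday
```

Note how the same clock time can be night in one zone and day in another, which is why the mapping is keyed by time zone rather than being a single global period.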
By way of example, the selecting a plurality of historical images from the plurality of images that have the same hours and minutes characteristic of the acquisition time may specifically include:
The plurality of historical images are partitioned by hour into 24 partitions. The hour value h1 and the minute value m1 of the acquisition time are obtained; the partition corresponding to h1 is extracted from the 24 partitions, and the historical images within it whose minute equals m1 are selected and determined as the plurality of historical images.
For example, if h1 = 18 and m1 = 30, the 18th of the 24 partitions is selected, and the historical images within it whose minute is 30 are selected and determined as the plurality of historical images.
Because the comparison below accounts for the effect of illumination, matching at hour-and-minute granularity is sufficient and seconds can be ignored; and because the illumination at a given place is mainly determined by the time of day, the day, month, and year are likewise not considered.
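The hour-partition-then-minute-match selection described above can be sketched as follows; the `(hour, minute, image_id)` record format is an assumption made purely for illustration:

```python
from collections import defaultdict

def select_same_minute(history, h1, m1):
    """Partition historical images by hour into 24 partitions, pick the
    partition for hour h1, then keep only the images whose minute equals m1.
    Each history record is a (hour, minute, image_id) tuple."""
    partitions = defaultdict(list)
    for hour, minute, image_id in history:
        partitions[hour].append((minute, image_id))
    return [image_id for minute, image_id in partitions[h1] if minute == m1]

# h1 = 18, m1 = 30: only images captured at 18:30 are retained.
history = [(18, 30, "a"), (18, 31, "b"), (17, 30, "c"), (18, 30, "d")]
print(select_same_minute(history, 18, 30))  # ['a', 'd']
```

Partitioning by hour first keeps the minute scan confined to one of 24 buckets, which mirrors the two-step selection in the text.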
For example, the filtering the history images including the vehicle image from the plurality of history images to obtain n history images may specifically include: and deleting the history images containing the vehicle images from the plurality of history images to obtain n history images.
Step S203: the server extracts two historical images from the n historical images, compares the first of the two with the first image to determine a first difference region in the first image that differs from the first historical image, compares the second of the two with the first image to determine a second difference region that differs from the second historical image, determines the overlapping area between the first and second difference regions, computes a first ratio of the overlapping area to the first difference region and a second ratio of the overlapping area to the second difference region, averages the two ratios to obtain an average ratio, and, if the average ratio is greater than a first threshold, determines that the first image contains a temporary obstacle and sends a reminder message to the vehicle-mounted terminal.
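The overlap-ratio decision of step S203 can be sketched as follows, with each difference region modeled as a set of pixel coordinates. The threshold value 0.5 is an illustrative assumption, not the patent's actual first threshold:

```python
def contains_temporary_obstacle(region1, region2, first_threshold=0.5):
    """Given two difference regions (sets of pixel coordinates) obtained by
    comparing the first image against two historical images, compute the
    overlap's ratio to each region, average the two ratios, and flag a
    temporary obstacle when the average exceeds the threshold."""
    if not region1 or not region2:
        return False
    overlap = region1 & region2
    r1 = len(overlap) / len(region1)   # first ratio: overlap vs. region 1
    r2 = len(overlap) / len(region2)   # second ratio: overlap vs. region 2
    return (r1 + r2) / 2 > first_threshold

# Two 100-pixel boxes sharing an 80-pixel strip: average ratio 0.8 -> obstacle.
box1 = {(x, y) for x in range(10) for y in range(10)}
box2 = {(x, y) for x in range(2, 12) for y in range(10)}
print(contains_temporary_obstacle(box1, box2))  # True
```

When the two difference regions barely overlap, the differences are more likely noise or lighting artifacts than a single physical object, so the average ratio stays below the threshold and no obstacle is reported.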
In the technical solution provided by the present application, the server receives the first image and its information from the vehicle-mounted terminal, the information including an acquisition time and acquisition coordinates. When the server identifies the first image and determines that it does not contain a vehicle image, and that the acquisition time falls within a night period, the server extracts from the historical images a plurality of images with the same acquisition coordinates, selects from them the historical images whose hour and minute match the acquisition time, and filters out those containing a vehicle image to obtain n historical images. The server then extracts any two of the n historical images, compares each with the first image to determine a first difference region and a second difference region, determines the overlapping area between the two difference regions, computes the ratio of the overlapping area to each difference region, averages the two ratios, and, if the average ratio is greater than a first threshold, determines that the first image contains a temporary obstacle and sends a reminder message to the vehicle-mounted terminal. This enables the identification of temporary obstacles, whose position in an image is typically random: images captured at the same coordinates at the same time of day should be substantially identical once the influence of vehicles is removed. If the first image differs from two independently selected historical images captured under similar conditions, and the two difference regions largely coincide, i.e., their overlapping portion exceeds a certain ratio, the acquired first image can be judged to contain a temporary obstacle. An object on the road that is small in volume yields a ratio below that value; once the ratio exceeds it, the object is large enough to count as an obstacle, and whatever the obstacle is, it has a certain effect on driving.
For example, the server extracting two history images from the n history images may specifically include:
the server randomly extracts two historical images from the n historical images; namely, two historical images are randomly extracted. Of course, in an optional application scenario, the server acquires n historical images, extracts n illuminances in the n historical images, selects 2 historical images corresponding to two illuminances closest to the first illuminance of the first image from the n illuminances, and determines the 2 historical images as two historical images.
For example, comparing the first historical image of the two historical images with the first image to determine the first distinguishing region in which the first image differs from the first historical image may specifically include:
Extracting the pixel value of each pixel of the first historical image (which may be the R, G, B values, the average of the R, G, B values, or a gray value), and entering the pixel value of each pixel into a matrix of preset size according to the pixel's position to obtain a first historical pixel matrix; extracting the pixel value of each pixel of the first image and entering it into a matrix of the same preset size to obtain a first image pixel matrix; calculating the difference between the first image pixel matrix and the first historical pixel matrix to obtain a difference matrix; retaining the element values in the difference matrix that do not fall within a threshold range to obtain a filter matrix; dividing the adjacent, continuous elements in the filter matrix into a plurality of regions; selecting the region with the largest number of elements as the region to be determined; and, if the number of elements in the region to be determined is greater than a number threshold, determining the region to be determined as the first distinguishing region.
A practical example follows. Since a real pixel matrix is very large, for reasons of space the difference matrix is taken as 5×5. Assume the 5×5 difference matrix is as shown in fig. 3, where the range labeled 401 is the region to be determined; for the filter matrix, elements with value 0 have been removed.
In this optional technical scenario, the search for a distinguishing region looks for areas whose pixels differ, and such an area must span a certain range: if the range it forms is too small, it is likely to be mere noise. A sufficient number of element values in the filter matrix is therefore required. Referring to fig. 3, two regions are formed: region 401 has 6 elements and region 402 has 4 elements, so region 401 becomes the region to be determined, and if the number threshold is 5, region 401 is determined to be the first distinguishing region. Of course, in practical applications the number threshold may be large, typically 1000 or even more than 10000.
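The difference-matrix filtering and region selection described above might be implemented as follows (a sketch assuming grayscale images; the noise threshold range and number threshold are the small illustrative values used with fig. 3, and connectivity is taken as 4-neighbour, which the patent does not specify):

```python
import numpy as np
from collections import deque

def first_distinguishing_region(first, hist, noise_range=(-30, 30), count_thresh=5):
    """Return the set of (row, col) positions forming the largest connected
    region of the filtered difference matrix, or None if it is too small."""
    diff = first.astype(int) - hist.astype(int)
    # Filter matrix: keep only elements outside the noise threshold range
    keep = (diff < noise_range[0]) | (diff > noise_range[1])
    seen = np.zeros_like(keep, dtype=bool)
    regions = []
    h, w = keep.shape
    # Group adjacent, continuous kept elements into regions (BFS flood fill)
    for r in range(h):
        for c in range(w):
            if keep[r, c] and not seen[r, c]:
                q, region = deque([(r, c)]), []
                seen[r, c] = True
                while q:
                    i, j = q.popleft()
                    region.append((i, j))
                    for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                        if 0 <= ni < h and 0 <= nj < w and keep[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            q.append((ni, nj))
                regions.append(region)
    if not regions:
        return None
    pending = max(regions, key=len)      # region with the most elements
    return set(pending) if len(pending) > count_thresh else None
```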
For example, comparing the second historical image of the two historical images with the first image to determine the second distinguishing region in which the first image differs from the second historical image may be performed in the same way as for the first historical image, and is not described again here.
For example, determining the overlapping region between the first distinguishing region and the second distinguishing region may specifically include:
Extracting the row value and column value of each pixel of the first distinguishing region within the first image pixel matrix; extracting, over all pixels of the first distinguishing region, the maximum value Hmax1 and minimum value Hmin1 of the row values, and the maximum value Wmax1 and minimum value Wmin1 of the column values; extracting in the same way, for the second distinguishing region, the maximum value Hmax2 and minimum value Hmin2 of the row values, and the maximum value Wmax2 and minimum value Wmin2 of the column values; determining a first row value interval [Hmin1, Hmax1] and a first column value interval [Wmin1, Wmax1] of the first image pixel matrix; determining a second row value interval [Hmin2, Hmax2] and a second column value interval [Wmin2, Wmax2]; and, if the first row value interval overlaps the second row value interval and the first column value interval overlaps the second column value interval, determining the overlapping region, the number of pixels in the overlapping region being the number x determined by the overlap between the row value intervals and between the column value intervals.
For example, calculating the first duty ratio of the overlapping region in the first distinguishing region and the second duty ratio of the overlapping region in the second distinguishing region may specifically include:
First duty ratio = x/X1, where X1 is the total number of pixels of the first distinguishing region;
Second duty ratio = x/X2, where X2 is the total number of pixels of the second distinguishing region.
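The overlap determination and the two duty ratios can be sketched as follows (we read the overlap pixel count x as the product of the overlapping row extent and overlapping column extent of the two bounding intervals, a reading the translated text leaves ambiguous):

```python
def overlap_and_ratios(region1, region2):
    """Given two regions as sets of (row, col) pixels, compute the overlap
    pixel count x from their overlapping bounding intervals, then the two
    duty ratios x/X1 and x/X2."""
    rows1 = [r for r, _ in region1]; cols1 = [c for _, c in region1]
    rows2 = [r for r, _ in region2]; cols2 = [c for _, c in region2]
    hmin1, hmax1 = min(rows1), max(rows1)     # first row value interval
    wmin1, wmax1 = min(cols1), max(cols1)     # first column value interval
    hmin2, hmax2 = min(rows2), max(rows2)     # second row value interval
    wmin2, wmax2 = min(cols2), max(cols2)     # second column value interval
    # Length of the overlap of the row intervals and of the column intervals
    row_overlap = min(hmax1, hmax2) - max(hmin1, hmin2) + 1
    col_overlap = min(wmax1, wmax2) - max(wmin1, wmin2) + 1
    if row_overlap <= 0 or col_overlap <= 0:
        return 0, 0.0, 0.0                    # bounding boxes do not overlap
    x = row_overlap * col_overlap             # pixels in the overlapping region
    return x, x / len(region1), x / len(region2)
```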
For example, the method may further include:
The server receives positioning coordinates sent by other vehicle-mounted terminals, and if the positioning coordinates fall within a preset range of the acquisition coordinates, sends a temporary obstacle reminder message to those other terminals.
That is, once a temporary obstacle is detected, reminder messages are also sent to other nearby vehicle-mounted terminals, improving driving safety.
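The proximity test for forwarding the reminder might look like this (a flat-earth approximation of distance; the 500 m preset range and the (latitude, longitude) coordinate format are our assumptions, not the patent's):

```python
import math

def should_forward_alert(pos, obstacle, radius_m=500.0):
    """True if `pos` (lat, lon in degrees) lies within `radius_m` metres of
    the obstacle's acquisition coordinates, using a local flat approximation."""
    lat1, lon1 = pos
    lat2, lon2 = obstacle
    dy = (lat2 - lat1) * 111_320.0                            # metres per degree of latitude
    dx = (lon2 - lon1) * 111_320.0 * math.cos(math.radians(lat1))
    return math.hypot(dx, dy) <= radius_m
```

A real deployment would more likely use haversine distance; the flat approximation is adequate at ranges of a few hundred metres.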
The foregoing description of the embodiments of the present application has been presented primarily in terms of a method-side implementation. It will be appreciated that the mobile electronic device, in order to achieve the above-described functionality, comprises corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a night image recognition system according to the present application, where the system includes:
The communication unit 401 is configured to receive a first image and information of the first image, where the information of the first image includes: collecting time and collecting coordinates;
a processing unit 402, configured to identify the first image and, when it is determined that the first image does not contain a vehicle image and that the acquisition time belongs to the night period, extract from the historical images a plurality of images with the same acquisition coordinates, select from the plurality of images a plurality of historical images whose hour and minute match the acquisition time, and filter out the historical images containing a vehicle image to obtain n historical images;
The comparison determining unit 403 is configured to extract two historical images from the n historical images, compare the first historical image of the two with the first image to determine a first distinguishing region in which the first image differs from the first historical image, compare the second historical image of the two with the first image to determine a second distinguishing region in which the first image differs from the second historical image, determine an overlapping region between the first distinguishing region and the second distinguishing region, calculate a first duty ratio of the overlapping region in the first distinguishing region and a second duty ratio of the overlapping region in the second distinguishing region, calculate the average of the first duty ratio and the second duty ratio to obtain an average duty ratio, and determine that the first image contains a temporary obstacle if the average duty ratio is greater than a first threshold value.
By way of example,
The processing unit 402 is specifically configured to determine, according to the acquisition coordinates, the time zone corresponding to the acquisition coordinates, determine, according to a mapping relationship between time zones and night periods, the first night period corresponding to the acquisition coordinates, and determine that the acquisition time belongs to the night period if the first time, consisting of the hour and minute of the acquisition time, falls within the first night period.
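A sketch of this night-period check (the time-zone-to-night-period mapping is a placeholder table; in the scheme it would come from the server's stored mapping relationship, and the night window may wrap past midnight):

```python
# Hypothetical mapping from time zone to its night period (start, end), in
# minutes since midnight; a real table would be server configuration.
NIGHT_PERIODS = {"UTC+8": (19 * 60, 6 * 60)}   # 19:00 -> 06:00 next day

def in_night_period(hour, minute, tz="UTC+8"):
    """True if the HH:MM acquisition time falls in the zone's night period."""
    start, end = NIGHT_PERIODS[tz]
    t = hour * 60 + minute
    if start <= end:
        return start <= t <= end
    return t >= start or t <= end              # window wraps past midnight
```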
By way of example,
The processing unit 402 is specifically configured to partition the plurality of historical images by hour to obtain 24 partitions of historical images, obtain the hour value h1 and minute value m1 of the acquisition time, extract the partition of historical images corresponding to h1, select from it the historical images whose minute is m1, and determine the selected historical images as the plurality of images.
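The hour partitioning and minute selection can be sketched as follows (history records are assumed to be (timestamp, image) pairs, a representation of our choosing):

```python
from collections import defaultdict
from datetime import datetime

def select_same_hour_minute(history, acq_time):
    """Partition `history` (a list of (timestamp, image) pairs) into 24 hour
    buckets, then keep the images whose minute matches the acquisition time."""
    buckets = defaultdict(list)                # the 24 hour partitions
    for ts, image in history:
        buckets[ts.hour].append((ts, image))
    h1, m1 = acq_time.hour, acq_time.minute
    return [img for ts, img in buckets[h1] if ts.minute == m1]
```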
By way of example,
The comparison determining unit 403 is specifically configured to extract the pixel value of each pixel of the first historical image, enter the pixel value of each pixel into a matrix of preset size according to the pixel's position to obtain a first historical pixel matrix, extract the pixel value of each pixel of the first image and enter it into a matrix of the same preset size to obtain a first image pixel matrix, calculate the difference between the first image pixel matrix and the first historical pixel matrix to obtain a difference matrix, retain the element values in the difference matrix that do not fall within a threshold range to obtain a filter matrix, divide the adjacent, continuous elements in the filter matrix into a plurality of regions, select the region with the largest number of elements as the region to be determined, and determine the region to be determined as the first distinguishing region if its number of elements is greater than a number threshold.
By way of example,
The comparison determining unit 403 is specifically configured to extract the row value and column value of each pixel of the first distinguishing region within the first image pixel matrix, extract, over all pixels of the first distinguishing region, the maximum value Hmax1 and minimum value Hmin1 of the row values and the maximum value Wmax1 and minimum value Wmin1 of the column values, extract in the same way, for the second distinguishing region, the maximum value Hmax2 and minimum value Hmin2 of the row values and the maximum value Wmax2 and minimum value Wmin2 of the column values, determine a first row value interval [Hmin1, Hmax1] and a first column value interval [Wmin1, Wmax1] of the first image pixel matrix, determine a second row value interval [Hmin2, Hmax2] and a second column value interval [Wmin2, Wmax2], and, if the first row value interval overlaps the second row value interval and the first column value interval overlaps the second column value interval, determine the overlapping region, the number of pixels in the overlapping region being the number x determined by the overlap between the row value intervals and between the column value intervals.
By way of example,
The comparison determining unit 403 is specifically configured to calculate the first duty ratio = x/X1, where X1 is the total number of pixels of the first distinguishing region; and the second duty ratio = x/X2, where X2 is the total number of pixels of the second distinguishing region.
By way of example,
The communication unit is further configured to send a temporary obstacle reminder message to the vehicle-mounted terminal.
By way of example,
The communication unit is further configured to receive positioning coordinates sent by other vehicle-mounted terminals and, if the positioning coordinates fall within a preset range of the acquisition coordinates, send a temporary obstacle reminder message to those other terminals.
The embodiment of the application can divide the functional units of the electronic device according to the method example, for example, each functional unit can be divided corresponding to each function, and two or more functions can be integrated in one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or computer program are loaded or executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means. The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains one or more sets of available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk.
The embodiment of the application also provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program makes a computer execute part or all of the steps of any one of the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package, said computer comprising an electronic device.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed method, apparatus and system may be implemented in other manners. For example, the device embodiments described above are merely illustrative; for example, the division of the units is only one logic function division, and other division modes can be adopted in actual implementation; for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may be physically included separately, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, or a volatile or nonvolatile memory — that is, various media in which program code may be stored. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
Although the present invention is disclosed above, the present invention is not limited thereto. Variations and modifications, including combinations of the different functions and implementation steps, as well as embodiments of the software and hardware, may be readily apparent to those skilled in the art without departing from the spirit and scope of the invention.

Claims (10)

1. A night time image recognition method, the method comprising the steps of:
the server receives a first image and information of the first image, wherein the information of the first image comprises: collecting time and collecting coordinates;
When the server identifies the first image and determines that the first image does not contain a vehicle image, and determines that the acquisition time belongs to the night period, extracting from the historical images a plurality of images with the same acquisition coordinates, selecting from the plurality of images a plurality of historical images whose hour and minute match the acquisition time, and filtering out the historical images containing a vehicle image to obtain n historical images;
The server extracts two historical images from the n historical images, compares the first historical image of the two with the first image to determine a first distinguishing region in which the first image differs from the first historical image, compares the second historical image of the two with the first image to determine a second distinguishing region in which the first image differs from the second historical image, determines an overlapping region between the first distinguishing region and the second distinguishing region, calculates a first duty ratio of the overlapping region in the first distinguishing region and a second duty ratio of the overlapping region in the second distinguishing region, calculates the average of the first duty ratio and the second duty ratio to obtain an average duty ratio, and determines that the first image contains a temporary obstacle if the average duty ratio is greater than a first threshold value.
2. The night image recognition method according to claim 1, wherein the determining that the acquisition period belongs to the night period specifically comprises:
Determining, according to the acquisition coordinates, the time zone corresponding to the acquisition coordinates, determining, according to a mapping relationship between time zones and night periods, the first night period corresponding to the acquisition coordinates, and determining that the acquisition time belongs to the night period if the first time, consisting of the hour and minute of the acquisition time, falls within the first night period.
3. The night image recognition method according to claim 2, wherein selecting a plurality of history images having the same characteristics as the hours and minutes of the acquisition time from the plurality of images specifically comprises:
Partitioning the plurality of historical images by hour to obtain 24 partitions of historical images, obtaining the hour value h1 and minute value m1 of the acquisition time, extracting the partition of historical images corresponding to h1, selecting from it the historical images whose minute is m1, and determining the selected historical images as the plurality of images.
4. The night image recognition method according to claim 1, wherein comparing the first history image of the two history images with the first image to determine a first distinguishing region different from the first history image in the first image comprises:
Extracting the pixel value of each pixel of the first historical image, entering the pixel value of each pixel into a matrix of preset size according to the pixel's position to obtain a first historical pixel matrix, extracting the pixel value of each pixel of the first image and entering it into a matrix of the same preset size to obtain a first image pixel matrix, calculating the difference between the first image pixel matrix and the first historical pixel matrix to obtain a difference matrix, retaining the element values in the difference matrix that do not fall within a threshold range to obtain a filter matrix, dividing the adjacent, continuous elements in the filter matrix into a plurality of regions, selecting the region with the largest number of elements as the region to be determined, and determining the region to be determined as the first distinguishing region if its number of elements is greater than a number threshold.
5. The night image recognition method of claim 4, wherein determining an overlap region between the first distinct region and the second distinct region comprises:
Extracting the row value and column value of each pixel of the first distinguishing region within the first image pixel matrix; extracting, over all pixels of the first distinguishing region, the maximum value Hmax1 and minimum value Hmin1 of the row values, and the maximum value Wmax1 and minimum value Wmin1 of the column values; extracting in the same way, for the second distinguishing region, the maximum value Hmax2 and minimum value Hmin2 of the row values, and the maximum value Wmax2 and minimum value Wmin2 of the column values; determining a first row value interval [Hmin1, Hmax1] and a first column value interval [Wmin1, Wmax1] of the first image pixel matrix; determining a second row value interval [Hmin2, Hmax2] and a second column value interval [Wmin2, Wmax2]; and, if the first row value interval overlaps the second row value interval and the first column value interval overlaps the second column value interval, determining an overlapping region between the first distinguishing region and the second distinguishing region, the number of pixels in the overlapping region being the number x determined by the overlap between the row value intervals and between the column value intervals.
6. The night image recognition method of claim 5, wherein calculating a first duty ratio of the overlapping region in the first distinguishing region and a second duty ratio of the overlapping region and the second distinguishing region specifically comprises:
First duty ratio = x/X1, where X1 is the total number of pixels of the first distinguishing region;
Second duty ratio = x/X2, where X2 is the total number of pixels of the second distinguishing region.
7. The night image recognition method of claim 1, wherein the method further comprises:
And the server sends a temporary obstacle reminding message to the vehicle-mounted terminal.
8. The night image recognition method of claim 7, wherein the method further comprises:
The server receives positioning coordinates sent by other vehicle-mounted terminals, and if the positioning coordinates fall within a preset range of the acquisition coordinates, sends a temporary obstacle reminder message to those other terminals.
9. A night time image recognition system, the system comprising:
The communication unit is used for receiving a first image and information of the first image, wherein the information of the first image comprises: collecting time and collecting coordinates;
The processing unit is configured to identify the first image and, when it is determined that the first image does not contain a vehicle image and that the acquisition time belongs to the night period, extract from the historical images a plurality of images with the same acquisition coordinates, select from the plurality of images a plurality of historical images whose hour and minute match the acquisition time, and filter out the historical images containing a vehicle image to obtain n historical images;
The comparison determining unit is configured to extract two historical images from the n historical images, compare the first historical image of the two with the first image to determine a first distinguishing region in which the first image differs from the first historical image, compare the second historical image of the two with the first image to determine a second distinguishing region in which the first image differs from the second historical image, determine an overlapping region between the first distinguishing region and the second distinguishing region, calculate a first duty ratio of the overlapping region in the first distinguishing region and a second duty ratio of the overlapping region in the second distinguishing region, calculate the average of the first duty ratio and the second duty ratio to obtain an average duty ratio, and determine that the first image contains a temporary obstacle if the average duty ratio is greater than a first threshold value.
10. The night image recognition system of claim 9, wherein,
The processing unit is specifically configured to determine, according to the acquisition coordinates, the time zone corresponding to the acquisition coordinates, determine, according to a mapping relationship between time zones and night periods, the first night period corresponding to the acquisition coordinates, and determine that the acquisition time belongs to the night period if the first time, consisting of the hour and minute of the acquisition time, falls within the first night period.
CN202311522896.5A 2023-11-15 2023-11-15 Night image recognition method and system Active CN117496483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311522896.5A CN117496483B (en) 2023-11-15 2023-11-15 Night image recognition method and system


Publications (2)

Publication Number Publication Date
CN117496483A CN117496483A (en) 2024-02-02
CN117496483B CN117496483B (en) 2024-05-31

Family

ID=89672382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311522896.5A Active CN117496483B (en) 2023-11-15 2023-11-15 Night image recognition method and system

Country Status (1)

Country Link
CN (1) CN117496483B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008059319A (en) * 2006-08-31 2008-03-13 Mitsubishi Electric Corp Object recognition device, and image object positioning device
KR20160093169A (en) * 2015-01-28 2016-08-08 한국기술교육대학교 산학협력단 Apparatus for sensing forward obstacle
CN111353339A (en) * 2018-12-21 2020-06-30 厦门歌乐电子企业有限公司 Object recognition device and method
CN115474441A (en) * 2021-03-24 2022-12-13 京东方科技集团股份有限公司 Obstacle detection method, apparatus, device, and computer storage medium
CN116092048A (en) * 2022-12-21 2023-05-09 北京和利时系统工程有限公司 Image recognition method and device suitable for vehicle-mounted equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11010622B2 (en) * 2018-11-02 2021-05-18 Toyota Research Institute, Inc. Infrastructure-free NLoS obstacle detection for autonomous cars


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Night-time preceding vehicle detection based on millimeter-wave radar and machine vision; Jin Lisheng, Cheng Lei, Cheng Bo; Journal of Automotive Safety and Energy; 2016-06-15 (02); full text *

Also Published As

Publication number Publication date
CN117496483A (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN110148144B (en) Point cloud data segmentation method and device, storage medium and electronic device
Mu et al. Traffic light detection and recognition for autonomous vehicles
CN101656023B (en) Management method of indoor car park in video monitor mode
CN109416413A (en) Solar energy forecast
CN102902952A (en) Onboard environment recognition system
EP4089659A1 (en) Map updating method, apparatus and device
CN105608417A (en) Traffic signal lamp detection method and device
US10217240B2 (en) Method and system to determine distance to an object in an image
CN108052921B (en) Lane line detection method, device and terminal
Song et al. Automatic detection and classification of road, car, and pedestrian using binocular cameras in traffic scenes with a common framework
CN117496483B (en) Night image recognition method and system
CN107463886B (en) Double-flash identification and vehicle obstacle avoidance method and system
CN116824152A (en) Target detection method and device based on point cloud, readable storage medium and terminal
CN106991415A (en) Image processing method and device for vehicle-mounted fisheye camera
Wu et al. Road boundary-enhanced automatic background filtering for roadside LiDAR sensors
CN115311891B (en) Roadside and parking lot free parking space sharing method, system and storage medium
CN116430404A (en) Method and device for determining relative position, storage medium and electronic device
CN116246308A (en) Multi-target tracking early warning method and device based on visual recognition and terminal equipment
CN207115438U (en) Image processing apparatus for vehicle-mounted fisheye camera
CN113312403B (en) Map acquisition method and device, electronic equipment and storage medium
Rebut et al. Road obstacles detection using a self-adaptive stereo vision sensor: a contribution to the ARCOS French project
Bertozzi et al. A night vision module for the detection of distant pedestrians
CN116206483B (en) Parking position determining method, electronic device and computer readable storage medium
CN116612059B (en) Image processing method and device, electronic equipment and storage medium
CN118397522B (en) Decision analysis method, device, system and storage medium based on real-time analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant