CN115035481A - Image object distance fusion method, device, equipment and storage medium - Google Patents

Image object distance fusion method, device, equipment and storage medium

Info

Publication number
CN115035481A
Authority
CN
China
Prior art keywords
target
distance
image
position point
preset position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210778505.5A
Other languages
Chinese (zh)
Inventor
邓传华
刘惠灵
曾毅
梅海鹏
曾远双
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Power Grid Co Ltd
Heyuan Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Guangdong Power Grid Co Ltd
Heyuan Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Power Grid Co Ltd and Heyuan Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority to CN202210778505.5A
Publication of CN115035481A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

The invention discloses an image object distance fusion method, device, equipment and storage medium, comprising the following steps: acquiring distance measurement values between each target and a preset position point in a monitoring range in real time through a distance sensor; acquiring an environment image corresponding to the monitoring range according to a preset time interval, and recognizing each target in the environment image to obtain an image recognition result corresponding to each target; and determining the predicted distance between each target and the preset position point in the environment image according to the distance measurement value between each target and the preset position point and the image recognition result corresponding to each target. The technical scheme of the embodiment of the invention can reduce the error of the image object distance fusion result and improve the accuracy of the image object distance fusion result.

Description

Image object distance fusion method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of computers, in particular to an image object distance fusion method, device, equipment and storage medium.
Background
Currently, many visualization services need to acquire a plurality of service-related images during implementation and to detect the target information, such as distance and size, of each target included in those images. By detecting the target information included in the plurality of images, the images can be fused into a single large-scene image, which is of great significance to visualization services.
Conventionally, when detecting distance information of an object in an image, information such as a distance and a size of another object is generally estimated from information (for example, a position and a size) of a certain reference object in the image.
However, in the prior art, when the photographing environment changes, for example in bad weather or strong light, the fidelity of the reference-object information in the image is degraded, so that the derived target information is biased; secondly, factors such as the resolution and brightness of the image also introduce errors into the distance information of the target.
Disclosure of Invention
The invention provides an image object distance fusion method, an image object distance fusion device, image object distance fusion equipment and a storage medium, which can reduce the error of an image object distance fusion result and improve the accuracy of the image object distance fusion result.
According to an aspect of the present invention, there is provided an image object distance fusion method, including:
acquiring distance measurement values between each target and a preset position point in a monitoring range in real time through a distance sensor;
acquiring an environment image corresponding to a monitoring range according to a preset time interval, and recognizing each target in the environment image to obtain an image recognition result corresponding to each target;
and determining the predicted distance between each target and the preset position point in the environment image according to the distance measurement value between each target and the preset position point and the image recognition result corresponding to each target.
According to another aspect of the present invention, there is provided an image object distance fusion apparatus, the apparatus comprising:
the distance measurement module is used for acquiring distance measurement values between each target and a preset position point in a monitoring range in real time through the distance sensor;
the target identification module is used for acquiring an environment image corresponding to the monitoring range according to a preset time interval, and recognizing each target in the environment image to obtain an image recognition result corresponding to each target;
and the prediction distance determining module is used for determining the prediction distance between each target and the preset position point in the environment image according to the distance measurement value between each target and the preset position point and the image recognition result corresponding to each target.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the image object distance fusion method according to any of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions for causing a processor to implement the image object distance fusion method according to any one of the embodiments of the present invention when the computer instructions are executed.
According to the technical scheme provided by the embodiment of the invention, the distance measurement value between each target and a preset position point in the monitoring range is obtained in real time through a distance sensor; an environment image corresponding to the monitoring range is collected according to a preset time interval, and each target in the environment image is recognized to obtain an image recognition result corresponding to each target; and the predicted distance between each target and the preset position point in the environment image is determined according to the distance measurement value between each target and the preset position point and the image recognition result corresponding to each target. In this way, the error of the image object distance fusion result can be reduced, and the accuracy of the image object distance fusion result improved.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present invention, nor are they intended to limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of an image object distance fusion method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another image object distance fusion method provided in accordance with an embodiment of the present invention;
FIG. 3 is a flow chart of another method for image object distance fusion according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an image object distance fusion apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an electronic device implementing the image object distance fusion method according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a flowchart of an image object distance fusion method according to an embodiment of the present invention, where the embodiment is applicable to a situation where a target distance in a monitored image is detected and the target distance is fused with the image, and the method may be executed by an image object distance fusion device, where the image object distance fusion device may be implemented in a form of hardware and/or software, and the image object distance fusion device may be configured in an electronic device (for example, a terminal or a server) with a data processing function. As shown in fig. 1, the method includes:
and 110, acquiring distance measurement values between each target and a preset position point in a monitoring range in real time through a distance sensor.
In this embodiment, the distance sensor is used to measure the distance between a target and the preset position point. Specifically, the distance sensor may be a laser ranging sensor, an ultrasonic ranging sensor, an infrared ranging sensor, a radar ranging sensor, or the like. The monitoring range is a preset area within which entering or exiting people and objects are monitored. Taking a high-voltage wire as an example, the monitoring range may be a preset dangerous area around the wire.
In this step, the distance measurement value between each target (including a person or an object, etc.) in the monitoring range and a preset position point may be obtained in real time through the distance sensor, and the preset position point may be a position point corresponding to the distance sensor.
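To make this acquisition loop concrete, a minimal sketch is given below. The read_targets() driver is hypothetical, since the patent does not fix a sensor API; it stands in for whichever laser, ultrasonic, infrared or radar sensor is actually installed.

```python
# Minimal sketch of step 110, under the assumption that a hypothetical
# read_targets() driver returns (target_id, distance_m) pairs.
import time

def distance_stream(read_targets, period_s=0.1):
    """Continuously yield (target_id, distance_m) pairs in real time."""
    while True:
        yield from read_targets()
        time.sleep(period_s)
```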
And step 120, acquiring an environment image corresponding to the monitoring range according to a preset time interval, and recognizing each target in the environment image to obtain an image recognition result corresponding to each target.
In this embodiment, while the measurement value of each target distance in the monitoring range is obtained, an image acquisition device (such as a camera) is further adopted to acquire an image (that is, an environmental image) in the monitoring range according to a preset time interval. The time interval may be 3s, and the specific value may be preset according to an actual situation, which is not limited in this embodiment.
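As a rough illustration of this periodic capture (not the patent's required implementation), an OpenCV camera loop might look as follows; the camera index and the downstream handler are assumptions.

```python
# Sketch of the periodic environment-image capture, assuming OpenCV.
import time

import cv2

def capture_loop(handler, camera_index=0, interval_s=3):
    """Grab one environment image every interval_s seconds (the 3 s
    example from the text) and hand it to a caller-supplied handler."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while cap.isOpened():
            ok, environment_image = cap.read()
            if ok:
                handler(environment_image)
            time.sleep(interval_s)
    finally:
        cap.release()
```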
In this step, after the environment image is acquired, each target in the environment image may be recognized by a pre-trained image recognition model to obtain the classification information, position information, and the like of each target; the classification information and position information of a target may then be used as its image recognition result.
In a specific embodiment, before recognizing the targets in an environment image, a plurality of environment sample images may be obtained in advance, and the targets present in each environment sample image labeled to obtain a plurality of training sample images; the training sample images are then input into a neural network model, and the neural network model is iteratively trained to obtain the image recognition model.
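A minimal sketch of that iterative training step is given below, using torchvision's Faster R-CNN as an illustrative stand-in; the patent names no specific network architecture, so the detector choice and class count are assumptions.

```python
# Illustrative training step for the image recognition model.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=2)  # background + generic "target" (assumed)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_step(images, annotations):
    """images: list of CHW float tensors; annotations: list of dicts with
    'boxes' (N x 4 tensor) and 'labels' (N,) from the labelled samples."""
    model.train()
    loss_dict = model(images, annotations)  # loss dict in training mode
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```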
And step 130, determining a predicted distance between each target and a preset position point in the environment image according to the distance measurement value between each target and the preset position point and the image recognition result corresponding to each target.
In this step, optionally, the distance measurement value of each target may be calibrated according to the position information of each target output by the image recognition model, and the calibrated distance measurement value is used as a predicted distance corresponding to the target, and then the predicted distance is filled into the corresponding target in the environment image, so as to obtain an image object distance fusion result.
In a specific embodiment, it is assumed that there are two targets in the environment image, target A and target B. The distance measurement value of target A obtained by the distance sensor is equal to that of target B, and the horizontal coordinate value of target A in the environment image is equal to that of target B, but their vertical coordinate values are not equal. The respective distance measurement values can therefore be calibrated according to the vertical coordinate values corresponding to target A and target B.
In this embodiment, by combining the hardware ranging sensor with the AI image recognition model, the distance of a target no longer needs to be calculated from a reference object in the image, so changes in the photographing environment cannot degrade the result through the fidelity of reference-object information in the image; secondly, the distance measurement value of a target is obtained by the ranging sensor and calibrated according to the image recognition result, which avoids errors in the image object distance fusion result caused by factors such as the resolution or brightness of the environment image.
According to the technical scheme provided by the embodiment of the invention, the distance measurement value between each target and a preset position point in the monitoring range is obtained in real time through a distance sensor; an environment image corresponding to the monitoring range is collected according to a preset time interval, and each target in the environment image is recognized to obtain an image recognition result corresponding to each target; and the predicted distance between each target and the preset position point in the environment image is determined according to the distance measurement value between each target and the preset position point and the image recognition result corresponding to each target. In this way, the error of the image object distance fusion result can be reduced, and the accuracy of the image object distance fusion result improved.
Fig. 2 is a flowchart of an image object distance fusion method according to a second embodiment of the present invention, which further details the above embodiment. As shown in fig. 2, the method includes:
step 201, obtaining a distance measurement value between each target and a preset position point in a monitoring range in real time through a distance sensor.
Step 202, acquiring coordinate information corresponding to each target in the monitoring range in real time through the distance sensor.
In this step, optionally, according to the two-dimensional coordinate system corresponding to the monitoring range, the horizontal coordinate value and the vertical coordinate value corresponding to each target may be obtained through the distance sensor.
And step 203, arranging the targets according to the sequence of the horizontal coordinate values from small to large.
In this step, the objects within the monitoring range may be arranged in the order of the horizontal coordinate values from small to large.
In one specific embodiment, it is assumed that the monitoring range includes the following targets: target A, target B and target C, where the horizontal coordinate value of target A is 4, that of target B is -6, and that of target C is 11. The arranged targets are then, respectively: target B, target A and target C.
And 204, sequentially storing the arranged targets, the horizontal coordinate values corresponding to the targets and the distance measurement values into a distance list.
In this step, a distance list may be constructed according to the mapping relationship among the arranged targets, their horizontal coordinate values, and their distance measurement values, as shown in Table 1:
TABLE 1
Object ID Horizontal coordinate value Distance measurement
1 -6 12m
2 4 18m
3 11 9m
…… …… ……
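Steps 202 to 204 amount to a sort-and-store; a small sketch using the Table 1 values follows (the field names are assumptions).

```python
# Build the distance list of steps 202-204 from the Table 1 example.
targets = [
    {"x": 4, "distance_m": 18.0},   # target A
    {"x": -6, "distance_m": 12.0},  # target B
    {"x": 11, "distance_m": 9.0},   # target C
]
distance_list = [
    {"id": i + 1, "x": t["x"], "distance_m": t["distance_m"]}
    for i, t in enumerate(sorted(targets, key=lambda t: t["x"]))
]
# ids 1..3 now correspond to targets B, A and C, matching Table 1
```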
And step 205, acquiring an environment image corresponding to the monitoring range according to a preset time interval.
And step 206, identifying the targets in the environment image to obtain the corresponding size information of each target.
In this step, size information of each object, such as length and width of the object, may be recognized through the image recognition model.
And step 207, generating a picture frame corresponding to each target in the environment image according to the size information corresponding to each target.
In this embodiment, optionally, a frame corresponding to each object may be generated in the environment image according to the size information of each object and the boundary information corresponding to each object.
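A sketch of the frame generation follows, assuming the recogniser reports each target as a top-left corner plus width and height in pixels; OpenCV is used only for drawing, and the detection field names are assumptions.

```python
# Draw one picture frame per recognised target (step 207).
import cv2

def draw_frames(image, detections, color=(0, 255, 0)):
    for det in detections:  # det: {"x": ..., "y": ..., "w": ..., "h": ...}
        top_left = (det["x"], det["y"])
        bottom_right = (det["x"] + det["w"], det["y"] + det["h"])
        cv2.rectangle(image, top_left, bottom_right, color, thickness=2)
    return image
```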
And step 208, sequentially acquiring the distance measurement values corresponding to the targets according to the arrangement sequence of the targets in the distance list, and combining the distance measurement values corresponding to the targets to obtain the image name of the environment image.
In this step, the distance measurement values of the targets may be sequentially obtained, and the distance measurement values of the targets may be combined according to a preset separator to obtain an image name.
In a specific embodiment, taking the distance list in Table 1 as an example, assuming that the distance list includes only target 1, target 2 and target 3, the generated image name may be "12m_18m_9m.jpg".
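Continuing that example, the name generation is a simple join; the underscore separator is the one implied by the example name.

```python
# Step 208: compose the image name from the ordered distance readings.
readings = ["12m", "18m", "9m"]  # order taken from the distance list
image_name = "_".join(readings) + ".jpg"
assert image_name == "12m_18m_9m.jpg"
```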
And step 209, sequentially filling the distance measurement value corresponding to each target into the picture frame corresponding to that target according to the image name of the environment image.
In this step, the distance measurement value of each object may be extracted according to a preset separator from the image name of the environment image, and the distance measurement value of each object may be filled in the corresponding frame.
Step 210, determining a predicted distance between each target and a preset position point according to a distance filling result corresponding to each picture frame.
In this step, optionally, the distance filling result corresponding to each frame may be directly used as the predicted distance of the corresponding target.
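The inverse operation, sketched below under the same separator assumption, splits the image name and fills the picture frames in list order.

```python
# Steps 209-210: recover per-target distances from the image name and
# attach them to the picture frames; the filled value becomes each
# frame's distance filling result (and hence its predicted distance).
def fill_frames(image_name, frames):
    stem = image_name.rsplit(".", 1)[0]
    for frame, reading in zip(frames, stem.split("_")):
        frame["distance"] = reading
    return frames

frames = fill_frames("12m_18m_9m.jpg", [{}, {}, {}])
# -> [{'distance': '12m'}, {'distance': '18m'}, {'distance': '9m'}]
```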
In this embodiment, when two targets in the monitoring range are close together, the distance sensor may obtain a distance measurement value for only one of them. To avoid omitting measurement values obtained by the distance sensor, this embodiment identifies the targets with an AI algorithm and generates a frame for each, so that all targets within the monitoring range are captured accurately and the predicted distance of each target is derived from the positional relationships between the targets.
In an embodiment of the present invention, determining a predicted distance between each target and a preset location point according to a distance filling result corresponding to each frame includes: if the target horizontal coordinate values corresponding to the plurality of picture frames are equal, acquiring target vertical coordinate values corresponding to the plurality of picture frames respectively; and correcting the distance filling result corresponding to each picture frame according to the target vertical coordinate value corresponding to each picture frame to obtain the predicted distance between each target and a preset position point.
In a specific embodiment, assuming that the horizontal coordinate values of the two targets acquired by the distance sensor are equal, and the vertical coordinate values are not equal, the vertical coordinate values of the two targets may be arranged in order from small to large. The smaller the vertical coordinate value is, the smaller the predicted distance of the corresponding target is considered, and the larger the vertical coordinate value is, the larger the predicted distance of the corresponding target is considered. By the method, the distance filling result corresponding to each picture frame can be calibrated to obtain the predicted distance corresponding to each target.
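One way to read that correction rule as code is sketched below; the patent gives only the ordering rule (smaller vertical coordinate, smaller predicted distance), so the size of the offset is an assumption.

```python
# Tie-break frames whose horizontal coordinates are equal: order by
# vertical coordinate and spread the shared measurement. offset_m is
# purely illustrative.
def correct_by_vertical(frames, offset_m=0.5):
    for rank, frame in enumerate(sorted(frames, key=lambda f: f["y"])):
        frame["predicted_m"] = frame["measured_m"] + rank * offset_m
    return frames

out = correct_by_vertical([{"y": 260, "measured_m": 12.0},
                           {"y": 140, "measured_m": 12.0}])
# the frame with y=140 keeps 12.0 m; the frame with y=260 becomes 12.5 m
```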
According to the technical scheme provided by the embodiment of the invention, the distance measurement values and coordinate information of all targets in the monitoring range are acquired in real time through the distance sensor; the targets are arranged in ascending order of horizontal coordinate value, and the arranged targets, their horizontal coordinate values and their distance measurement values are stored in a distance list. An environment image corresponding to the monitoring range is then collected according to a preset time interval, the targets in the environment image are recognized to obtain their size information, and a picture frame is generated for each target according to that size information. The distance measurement values are read from the distance list in arrangement order and combined into the image name of the environment image; according to the image name, the distance measurement values are filled in turn into the picture frames corresponding to the targets, and the predicted distance of each target is determined from the distance filling result of each picture frame. In this way, the error of the image object distance fusion result can be reduced, and the accuracy of the image object distance fusion result improved.
Fig. 3 is a flowchart of an image object distance fusion method according to a third embodiment of the present invention, which further details the above-described embodiment. As shown in fig. 3, the method includes:
Step 301, acquiring distance measurement values between each target and a preset position point in a monitoring range in real time through a distance sensor.
Step 302, identifying the targets existing in the environment image to obtain the size information corresponding to each target.
Step 303, generating a frame corresponding to each target in the environment image according to the size information corresponding to each target.
And step 304, sequentially acquiring distance measurement values corresponding to the targets according to the arrangement sequence of the targets in the distance list, and combining the distance measurement values corresponding to the targets to obtain the image name of the environment image.
And step 305, sequentially filling the distance measurement value corresponding to each target into the picture frame corresponding to that target according to the image name of the environment image.
Step 306, determining the predicted distance between each target and a preset position point according to the distance filling result corresponding to each picture frame.
And step 307, sorting the picture frames according to the areas corresponding to the picture frames.
In this step, optionally, the frames may be sorted in order of increasing area.
And step 308, determining a reference distance between each target and a preset position point according to the sorted picture frames and the target prediction distance corresponding to each picture frame.
In this embodiment, the target prediction distance is a prediction distance between a target corresponding to the picture frame and a preset position point. The reference distance may be a distance obtained by calibrating the predicted distance.
In a specific embodiment, the larger the area of the frame is, the smaller the actual distance between the corresponding object and the preset position point is, and the smaller the area of the frame is, the larger the actual distance between the corresponding object and the preset position point is. Therefore, the target prediction distance of each frame can be calibrated according to the sorted frames in the mode, and the reference distance of each target is obtained.
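A sketch of that area-based cross-check follows; pairing the largest frame with the smallest predicted distance is an assumption consistent with the rule above, not a formula taken from the patent.

```python
# Steps 307-308: rank picture frames by area (larger -> nearer) and
# derive each target's reference distance by reordering the predicted
# distances accordingly.
def reference_distances(frames):
    by_area = sorted(frames, key=lambda f: f["w"] * f["h"], reverse=True)
    for frame, ref in zip(by_area, sorted(f["predicted_m"] for f in frames)):
        frame["reference_m"] = ref  # largest frame gets smallest distance
    return frames
```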
Step 309, determining whether the reference distance corresponding to each target is consistent with the predicted distance; if yes, executing step 310, and if not, executing steps 311 and 312.
And step 310, taking the predicted distance corresponding to each target as a final image object distance fusion result.
In this step, if the predicted distance corresponding to the target is consistent with the reference distance, that is, in the process of calibrating the predicted distance, when the predicted distance does not need to be changed, the predicted distance of the target is used as the final image object distance fusion result.
And step 311, taking any target whose reference distance is inconsistent with its predicted distance as an abnormal target, and still taking the predicted distance corresponding to each target as the final image object distance fusion result.
In this step, if the predicted distance corresponding to the target is inconsistent with the reference distance, that is, the predicted distance needs to be changed in the calibration process of the predicted distance, the predicted distance of the target is used as a final image object distance fusion result.
The reason for this is that the actual heights and widths of different targets differ: when the predicted distance of a target is inconsistent with its reference distance, modifying the predicted distance according to the frame area alone is very likely to introduce an error into the image object distance fusion result. Taking the predicted distance as the image object distance fusion result therefore preserves its accuracy.
And step 312, marking the picture frame of the abnormal target in the environment image in a preset color so as to prompt the user.
In this embodiment, when the predicted distance of the target does not coincide with the reference distance, the target may be regarded as an abnormal target, and a frame of the abnormal target is marked in a special color in the environment image to prompt the user to further check the predicted distance of the target.
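Steps 309 to 312 can be put together as the following sketch; red stands in for the preset colour, and the comparison tolerance is an assumption.

```python
# Keep the predicted distance as the fused result, and redraw the
# picture frame of any target whose reference and predicted distances
# disagree, to prompt the user.
import cv2

def flag_anomalies(image, frames, tol_m=1e-6, warn_bgr=(0, 0, 255)):
    fused = []
    for f in frames:
        fused.append(f["predicted_m"])  # always the final fusion result
        if abs(f["reference_m"] - f["predicted_m"]) > tol_m:
            cv2.rectangle(image, (f["x"], f["y"]),
                          (f["x"] + f["w"], f["y"] + f["h"]),
                          warn_bgr, thickness=2)
    return image, fused
```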
According to the technical scheme provided by the embodiment of the invention, target distance measurement values are acquired in real time through a distance sensor; the targets in the environment image are recognized to obtain their size information, and a picture frame is generated for each target. The distance measurement values of the targets are combined into the image name and then filled in turn into the picture frames, and the predicted distance of each target is determined from the distance filling result. The picture frames are sorted by area, the reference distance of each target is determined from the sorted picture frames and the target prediction distance corresponding to each picture frame, and each target's reference distance is compared with its predicted distance; if they are inconsistent, the target is treated as an abnormal target, the predicted distance of each target is still taken as the final image object distance fusion result, and the picture frame of the abnormal target is marked in the environment image in a preset color to prompt the user. This technical means can reduce the error of the image object distance fusion result and improve the accuracy of the image object distance fusion result.
Fig. 4 is a schematic structural diagram of an image object distance fusion apparatus according to a fourth embodiment of the present invention, as shown in fig. 4, the apparatus includes: a distance measurement module 410, an object recognition module 420, and a predicted distance determination module 430.
The distance measuring module 410 is configured to obtain, in real time, distance measurement values between each target and a preset position point within a monitoring range through a distance sensor;
the target identification module 420 is configured to acquire an environment image corresponding to the monitoring range according to a preset time interval, and to recognize each target existing in the environment image to obtain an image recognition result corresponding to each target;
the predicted distance determining module 430 is configured to determine a predicted distance between each target and a preset position point in the environment image according to a distance measurement value between each target and the preset position point and an image recognition result corresponding to each target.
According to the technical scheme provided by the embodiment of the invention, the distance measurement value between each target and a preset position point in the monitoring range is obtained in real time through a distance sensor; an environment image corresponding to the monitoring range is collected according to a preset time interval, and each target in the environment image is recognized to obtain an image recognition result corresponding to each target; and the predicted distance between each target and the preset position point in the environment image is determined according to the distance measurement value between each target and the preset position point and the image recognition result corresponding to each target. In this way, the error of the image object distance fusion result can be reduced, and the accuracy of the image object distance fusion result improved.
On the basis of the above embodiment, the distance measuring module 410 includes:
the coordinate acquisition unit is used for acquiring coordinate information corresponding to each target in a monitoring range in real time through the distance sensor;
the target arrangement unit is used for arranging all the targets according to the sequence of horizontal coordinate values from small to large;
and the distance list construction unit is used for sequentially storing the arranged targets, the horizontal coordinate values corresponding to the targets and the distance measurement values into a distance list.
The object recognition module 420 includes:
the size acquisition unit is used for identifying targets existing in the environment image to obtain size information corresponding to each target;
and the picture frame generating unit is used for generating a picture frame corresponding to each target in the environment image according to the size information corresponding to each target.
The predicted distance determination module 430 includes:
the measurement value combination unit is used for sequentially obtaining the distance measurement values corresponding to the targets in the distance list according to the arrangement sequence of the targets, and combining the distance measurement values corresponding to the targets to obtain the image name of the environment image;
the measured value filling unit is used for sequentially filling the distance measured values corresponding to the targets into the picture frames corresponding to the targets according to the image names of the environment images;
the filling result processing unit is used for determining the predicted distance between each target and a preset position point according to the distance filling result corresponding to each picture frame;
the vertical coordinate value acquisition unit is used for acquiring target vertical coordinate values corresponding to a plurality of picture frames if the target horizontal coordinate values corresponding to the plurality of picture frames are equal;
the filling result correction unit is used for correcting the distance filling result corresponding to each picture frame according to the target vertical coordinate value corresponding to each picture frame to obtain the predicted distance between each target and a preset position point;
the picture frame sequencing unit is used for sequencing the picture frames according to the areas corresponding to the picture frames;
a reference distance determining unit, configured to determine, according to the sorted frames and the target prediction distances corresponding to the frames, reference distances between the targets and preset position points;
and the distance judging unit is used for judging whether the reference distance corresponding to each target is consistent with the predicted distance or not, if so, taking the predicted distance corresponding to each target as a final image object distance fusion result, otherwise, taking the target with the inconsistent reference distance and predicted distance as an abnormal target, taking the predicted distance corresponding to each target as a final image object distance fusion result, and marking a picture frame of the abnormal target according to a preset color in the environment image so as to prompt a user.
The device can execute the methods provided by all the embodiments of the invention, and has corresponding functional modules and beneficial effects for executing the methods. For technical details which are not described in detail in the embodiments of the present invention, reference may be made to the methods provided in all the aforementioned embodiments of the present invention.
FIG. 5 illustrates a schematic diagram of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 5, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13. The memory stores a computer program executable by the at least one processor, and the processor 11 can perform various suitable actions and processes according to the computer program stored in the ROM 12 or loaded from a storage unit 18 into the RAM 13. The RAM 13 may also store various programs and data necessary for the operation of the electronic device 10. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. Processor 11 performs the various methods and processes described above, such as the image object distance fusion method.
In some embodiments, the image object distance fusion method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the image object distance fusion method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the image object distance fusion method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system that overcomes the defects of high management difficulty and weak service expansibility in traditional physical hosts and VPS (virtual private server) services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An image object distance fusion method, comprising:
acquiring distance measurement values between each target and a preset position point in a monitoring range in real time through a distance sensor;
acquiring an environment image corresponding to a monitoring range according to a preset time interval, and recognizing each target existing in the environment image to obtain an image recognition result corresponding to each target;
and determining the predicted distance between each target and the preset position point in the environment image according to the distance measurement value between each target and the preset position point and the image recognition result corresponding to each target.
2. The method of claim 1, after obtaining the distance measurement value between each target and the preset position point in the monitoring range in real time, further comprising:
acquiring coordinate information corresponding to each target in a monitoring range in real time through the distance sensor;
arranging all the targets according to the sequence of horizontal coordinate values from small to large;
and sequentially storing each arranged target, the horizontal coordinate value corresponding to each target and the distance measurement value into a distance list.
3. The method of claim 2, wherein recognizing each target existing in the environment image to obtain an image recognition result corresponding to each target comprises:
identifying targets existing in the environment image to obtain size information corresponding to each target;
and generating a picture frame corresponding to each target in the environment image according to the size information corresponding to each target.
4. The method of claim 3, wherein determining the predicted distance between each target and the predetermined position point in the environment image according to the distance measurement value between each target and the predetermined position point and the image recognition result corresponding to each target comprises:
sequentially acquiring distance measurement values corresponding to the targets according to the arrangement sequence of the targets in the distance list, and combining the distance measurement values corresponding to the targets to obtain the image name of the environment image;
sequentially filling the distance measurement value corresponding to each target into the picture frame corresponding to each target according to the image name of the environment image;
and determining the predicted distance between each target and a preset position point according to the distance filling result corresponding to each picture frame.
5. The method of claim 4, wherein determining the predicted distance between each target and the predetermined location point according to the distance filling result corresponding to each frame comprises:
if the target horizontal coordinate values corresponding to the plurality of picture frames are equal, acquiring target vertical coordinate values corresponding to the plurality of picture frames respectively;
and correcting the distance filling result corresponding to each picture frame according to the target vertical coordinate value corresponding to each picture frame to obtain the predicted distance between each target and a preset position point.
6. The method of claim 4, wherein after determining the predicted distance between each target and the predetermined location point according to the distance filling result corresponding to each frame, further comprising:
sorting the picture frames according to the areas corresponding to the picture frames;
determining a reference distance between each target and a preset position point according to the sorted picture frames and the target prediction distance corresponding to each picture frame;
judging whether the reference distance corresponding to each target is consistent with the predicted distance or not;
and if so, taking the predicted distance corresponding to each target as a final image object distance fusion result.
7. The method of claim 6, wherein after determining whether the reference distance corresponding to each of the objects is consistent with the predicted distance, further comprising:
if not, taking the target with the inconsistent reference distance and predicted distance as an abnormal target, and taking the predicted distance corresponding to each target as a final image object distance fusion result;
and marking the picture frame of the abnormal target in the environment image according to a preset color so as to prompt a user.
8. An image object distance fusion device, comprising:
the distance measurement module is used for acquiring distance measurement values between each target and a preset position point in a monitoring range in real time through the distance sensor;
the target identification module is used for acquiring an environment image corresponding to the monitoring range according to a preset time interval, and recognizing each target in the environment image to obtain an image recognition result corresponding to each target;
and the prediction distance determining module is used for determining the prediction distance between each target and the preset position point in the environment image according to the distance measurement value between each target and the preset position point and the image recognition result corresponding to each target.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the image object distance fusion method of any one of claims 1-7.
10. A computer-readable storage medium having stored thereon computer instructions for causing a processor to execute the image object distance fusion method according to any one of claims 1-7.
CN202210778505.5A 2022-06-30 2022-06-30 Image object distance fusion method, device, equipment and storage medium Pending CN115035481A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210778505.5A CN115035481A (en) 2022-06-30 2022-06-30 Image object distance fusion method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210778505.5A CN115035481A (en) 2022-06-30 2022-06-30 Image object distance fusion method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115035481A 2022-09-09

Family

ID=83129684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210778505.5A Pending CN115035481A (en) 2022-06-30 2022-06-30 Image object distance fusion method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115035481A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116128782A (en) * 2023-04-19 2023-05-16 苏州苏映视图像软件科技有限公司 Image generation method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
CN112633384A (en) Object identification method and device based on image identification model and electronic equipment
CN113095336B (en) Method for training key point detection model and method for detecting key points of target object
CN112613569B (en) Image recognition method, training method and device for image classification model
CN112863187B (en) Detection method of perception model, electronic equipment, road side equipment and cloud control platform
CN114419035B (en) Product identification method, model training device and electronic equipment
CN113177468A (en) Human behavior detection method and device, electronic equipment and storage medium
CN112597837A (en) Image detection method, apparatus, device, storage medium and computer program product
CN113409284A (en) Circuit board fault detection method, device, equipment and storage medium
CN115656989A (en) External parameter calibration method and device, electronic equipment and storage medium
CN115035481A (en) Image object distance fusion method, device, equipment and storage medium
CN111815576A (en) Method, device, equipment and storage medium for detecting corrosion condition of metal part
CN113012441B (en) Vehicle parking detection method, system, electronic device, and storage medium
CN113219505A (en) Method, device and equipment for acquiring GPS coordinates for vehicle-road cooperative tunnel scene
CN114219003A (en) Training method and device of sample generation model and electronic equipment
CN113344906A (en) Vehicle-road cooperative camera evaluation method and device, road side equipment and cloud control platform
CN113029136A (en) Method, apparatus, storage medium, and program product for positioning information processing
CN115951344A (en) Data fusion method and device for radar and camera, electronic equipment and storage medium
CN114596362B (en) High-point camera coordinate calculation method and device, electronic equipment and medium
CN115422617A (en) Frame image size measuring method, device and medium based on CAD
CN112153320B (en) Method and device for measuring size of article, electronic equipment and storage medium
CN113537192B (en) Image detection method, device, electronic equipment and storage medium
CN112749978A (en) Detection method, apparatus, device, storage medium, and program product
CN113326796A (en) Object detection method, model training method and device and electronic equipment
CN110389947A (en) A kind of blacklist generation method, device, equipment and medium
CN115327497B (en) Radar detection range determining method, radar detection range determining device, electronic equipment and readable medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination