CN114627443B - Target detection method, target detection device, storage medium, electronic equipment and vehicle - Google Patents


Info

Publication number: CN114627443B
Application number: CN202210249458.5A
Authority: CN (China)
Prior art keywords: type, target, target object, preset, types
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN114627443A
Inventor: 张琼
Current Assignee: Xiaomi Automobile Technology Co Ltd
Original Assignee: Xiaomi Automobile Technology Co Ltd
Application filed by Xiaomi Automobile Technology Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks

Abstract

The disclosure relates to a target detection method and device, a storage medium, an electronic device, and a vehicle. The method comprises: acquiring a plurality of target objects in an environment image around a target vehicle, together with the adjacent positional relationships of those target objects in the environment image; and determining the target type corresponding to each target object according to the adjacent positional relationships and a preset relation map, where the preset relation map comprises preset association relationships among a plurality of target types. By combining the association relationships among types in the preset relation map with the adjacent positional relationships among objects in the environment image, the target type of each object can be identified more accurately. This avoids the recognition errors that arise when each object is identified in isolation, improves target detection accuracy, and thereby improves the reliability and safety of automatic driving.

Description

Target detection method, target detection device, storage medium, electronic equipment and vehicle
Technical Field
The disclosure relates to the technical field of artificial intelligence, and in particular to a target detection method, a target detection device, a storage medium, an electronic device, and a vehicle.
Background
With the development of automatic driving technology, the ability of an autonomous vehicle to accurately recognize target objects in its surroundings has become an important factor affecting the reliability and safety of automatic driving. In the related art, an image of the surroundings can be identified by a pre-trained model to determine the target objects in the environment. However, in some complex scenes target objects are detected erroneously, which affects the reliability and safety of automatic driving.
Disclosure of Invention
In order to overcome the above-mentioned problems in the related art, the present disclosure provides a target detection method, apparatus, storage medium, electronic device, and vehicle.
According to a first aspect of embodiments of the present disclosure, there is provided a target detection method, the method including:
acquiring an environment image around a target vehicle;
acquiring a plurality of target objects in the environment image and adjacent position relations of the plurality of target objects in the environment image;
and determining a target type corresponding to the target object according to the adjacent position relation and a preset relation map, wherein the preset relation map comprises preset association relations among a plurality of target types.
Optionally, the determining, according to the adjacent position relationship and the preset relationship map, the target type corresponding to the target object includes:
acquiring a pending type corresponding to each target object;
and acquiring a target type corresponding to the target object according to the undetermined type, the adjacent position relation and the preset relation map.
Optionally, the obtaining, according to the pending type, the adjacent positional relationship, and the preset relationship map, the target type corresponding to the target object includes:
determining, according to the preset relation map, a first target object whose pending type is a top-level target type, wherein the top-level target type is a type which contains other target types in the preset relation map and is not contained in any other target type;
taking, as candidate types corresponding to the first target object, other target types that have a preset association relationship with the top-level target type in the preset relation map;
determining a second target object having an adjacent positional relationship with the first target object;
and taking the pending type of the second target object as the target type of the second target object in the case where the candidate types include the pending type of the second target object.
Optionally, the method further comprises:
updating the pending type of the second target object according to the candidate type if the candidate type does not contain the pending type of the second target object;
and taking the updated undetermined type as the target type of the second target object.
Optionally, the obtaining, according to the pending type, the adjacent positional relationship, and the preset relation map, the target type corresponding to the target object includes:
acquiring a third target object and a fourth target object that have an adjacent positional relationship;
determining, according to the preset relation map, whether a preset association relationship exists between a third pending type of the third target object and a fourth pending type of the fourth target object;
and in the case where a preset association relationship exists between the third pending type and the fourth pending type, determining the target types of the third target object and the fourth target object according to the third pending type and the fourth pending type.
Optionally, the method further comprises:
updating the third undetermined type and the fourth undetermined type according to the preset relation map under the condition that the preset association relation does not exist between the third undetermined type and the fourth undetermined type;
And determining the target types of the third target object and the fourth target object according to the updated third pending type and fourth pending type.
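The mutual check between the third and fourth target objects described above can be sketched as follows. This is an illustrative sketch only, not part of the claims: the type names, the `ASSOCIATED` pairs, and the fallback policy of keeping the third pending type while re-deriving the fourth are assumptions.

```python
# Hypothetical preset association pairs drawn from the relation map; the
# actual map in the patent (FIG. 3) is larger. frozenset makes the pair
# direction-independent.
ASSOCIATED = {
    frozenset({"pole", "traffic signal lamp"}),
    frozenset({"car", "taillight"}),
}

def verify_pair(third_pending, fourth_pending):
    """Confirm both pending types if they are associated; otherwise try to
    re-derive a consistent pair from the relation map (assumed policy)."""
    if frozenset({third_pending, fourth_pending}) in ASSOCIATED:
        return third_pending, fourth_pending  # both confirmed as target types
    # Fallback: keep the third pending type, update the fourth to a type
    # that the relation map associates with it.
    for pair in ASSOCIATED:
        if third_pending in pair:
            (other,) = pair - {third_pending}
            return third_pending, other
    return None, None  # no consistent assignment found
```

Usage: `verify_pair("pole", "street lamp")` would update the fourth pending type to `"traffic signal lamp"`, the only type associated with `"pole"` in this toy map.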
According to a second aspect of embodiments of the present disclosure, there is provided an object detection apparatus, the apparatus comprising:
an image acquisition module configured to acquire an environmental image of a periphery of a target vehicle;
an object acquisition module configured to acquire a plurality of target objects in the environment image and adjacent positional relationships of the plurality of target objects in the environment image;
the type determining module is configured to determine a target type corresponding to the target object according to the adjacent position relation and a preset relation map, wherein the preset relation map comprises preset association relations among a plurality of target types.
Optionally, the type determining module is configured to obtain a pending type corresponding to each target object; and acquiring a target type corresponding to the target object according to the undetermined type, the adjacent position relation and the preset relation map.
Optionally, the type determining module is configured to: determine, according to the preset relation map, a first target object whose pending type is a top-level target type, wherein the top-level target type is a type which contains other target types in the preset relation map and is not contained in any other target type; take, as candidate types corresponding to the first target object, other target types that have a preset association relationship with the top-level target type in the preset relation map; determine a second target object having an adjacent positional relationship with the first target object; and take the pending type of the second target object as its target type in the case where the candidate types include the pending type of the second target object.
Optionally, the type determining module is configured to update the pending type of the second target object according to the candidate type if the candidate type does not include the pending type of the second target object; and taking the updated undetermined type as the target type of the second target object.
Optionally, the type determining module is configured to acquire a third target object and a fourth target object with adjacent position relations; determining whether a preset association relationship exists between a third undetermined type of the third target object and a fourth undetermined type of the fourth target object according to the preset relationship map; and under the condition that a preset association relation exists between the third undetermined type and the fourth undetermined type, determining the target types of the third target object and the fourth target object according to the third undetermined type and the fourth undetermined type.
Optionally, the type determining module is configured to update the third pending type and the fourth pending type according to the preset relationship map if a preset association relationship does not exist between the third pending type and the fourth pending type; and determining the target types of the third target object and the fourth target object according to the updated third pending type and fourth pending type.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the object detection method provided in the first aspect of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the object detection method provided by the first aspect of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a vehicle comprising the electronic device provided by the third aspect of the present disclosure.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects: a plurality of target objects in an environment image around a target vehicle are acquired, together with the adjacent positional relationships of those target objects in the environment image, and the target type corresponding to each target object is determined according to the adjacent positional relationships and a preset relation map, where the preset relation map comprises preset association relationships among a plurality of target types. By combining the association relationships among types in the preset relation map with the adjacent positional relationships among objects in the environment image, the target type of each target object can be identified more accurately. This avoids the recognition errors caused by identifying each object in isolation, improves target detection accuracy, and thereby improves the reliability and safety of automatic driving.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram showing an environment image captured while a vehicle is traveling at night, according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a method of object detection according to an exemplary embodiment.
FIG. 3 is a schematic diagram illustrating a preset relationship map according to an exemplary embodiment.
Fig. 4 is a block diagram illustrating an object detection apparatus according to an exemplary embodiment.
Fig. 5 is a block diagram of an electronic device, shown in accordance with an exemplary embodiment.
FIG. 6 is a block diagram of a vehicle, according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
It should be noted that all actions of acquiring signals, information, or data in the present application are performed in compliance with the applicable data protection laws and policies of the relevant country and with the authorization of the owner of the corresponding device.
In this disclosure, terms such as "first," "second," and the like are used to distinguish between similar objects and not necessarily to be construed as a particular order or precedence. In addition, unless otherwise stated, in the description with reference to the drawings, the same reference numerals in different drawings denote the same elements.
First, an application scenario of the present disclosure will be described. The present disclosure may be applied to automatic driving or assisted driving scenarios. In the related art, images of the surrounding environment may be recognized by a pre-trained model to determine the target objects in the environment, and the vehicle may then be controlled, directly or in an assisted manner, to travel according to those target objects. The recognition accuracy of target objects therefore has a great influence on the reliability and safety of an autonomous vehicle; however, in some complex scenes the detection accuracy is not high. For example, in the related art a neural network model may parse the features of individual objects and identify them by those features, but this way of identifying objects in isolation can lead to erroneous identification when objects of similar shape and color are present.
Fig. 1 shows an image of an environment in which a vehicle is traveling at night. As shown in Fig. 1, on a night road where street lamps, vehicle lamps, and traffic lights coexist, identifying each object in isolation may, in heavy traffic, cause the red taillights of distant surrounding vehicles to be identified as red traffic lights, or distant street lamps to be identified as traffic lights, which affects the reliability of automatic driving.
To solve the above problems, the present disclosure provides a target detection method and device, a storage medium, an electronic device, and a vehicle. A preset relation map containing preset association relationships among a plurality of target types is constructed in advance. By combining these association relationships among types with the adjacent positional relationships among the target objects in the environment image, the target type of each object can be identified more accurately. This avoids the recognition errors caused by identifying single objects in isolation and improves target object detection accuracy, thereby improving the reliability and safety of automatic driving.
The present disclosure is described below in connection with specific embodiments.
Fig. 2 is a flowchart illustrating a target detection method according to an exemplary embodiment, which may be applied to an autonomous vehicle or a driving-assisted vehicle. As shown in fig. 2, the method may include:
s201, acquiring an environment image around the target vehicle.
In this step, an environmental image of the surroundings of the target vehicle may be acquired by the environmental detection means. The environment detection device can comprise one or more cameras, and the environment image of the periphery of the target vehicle, particularly the environment image in front of the vehicle, can be shot and acquired in real time through the cameras.
In some embodiments, the environment detection device may also include an infrared camera. The infrared camera may be provided with an infrared filter, which can be automatically turned on or off according to the ambient brightness so as to improve the clarity of the captured environment image. When the ambient brightness is sufficient in the daytime, the infrared filter can be enabled to block infrared rays from entering the lens, so that only visible light is sensed and a clear environment image can be captured; at night, or when the ambient brightness is insufficient, the infrared filter can be disabled so that infrared rays enter the lens and form an image, allowing a clear environment image to be captured at night. The photosensitive element may be a CCD (Charge-Coupled Device) or a CMOS (Complementary Metal-Oxide-Semiconductor) sensor.
Infrared (IR) is electromagnetic radiation with frequencies between those of microwaves and visible light, conventionally 0.3 THz to 400 THz in the electromagnetic spectrum, corresponding to vacuum wavelengths of 1 mm down to 750 nm. Because infrared has a longer wavelength than visible light, it diffracts around tiny particles, giving better night-vision capability and the ability to penetrate cloud and haze, so clear environment images can be captured in complex conditions (night, rain and snow, heavy fog, and the like). The imaging principle of an infrared camera is similar to that of the human eye: the photosensitive element in the camera senses incident infrared radiation. The element can be regarded as a rectangular plate on which many photosensitive points are placed; when infrared light strikes a point, the point is activated, electronic transitions occur and a potential difference is produced, the potential difference is converted into a digital signal by analog-to-digital conversion, and finally the captured environment image is generated from the digital signals.
S202, acquiring a plurality of target objects in an environment image and adjacent position relations of the plurality of target objects in the environment image.
If two target objects are adjacent in the environment image, it can be determined that an adjacent positional relationship exists between them. Conversely, if the positions of the two target objects in the environment image are not adjacent (for example, the distance between them is greater than or equal to a preset distance threshold, or another target object lies between them as a barrier), it can be determined that no adjacent positional relationship exists between them. Further, the adjacent positional relationship may include an up-down relationship, a left-right relationship, and an inclusion relationship.
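The adjacency decision described above can be sketched as follows. The axis-aligned bounding-box representation, the helper names, and the 20-pixel distance threshold are illustrative assumptions; the patent itself only specifies a preset distance threshold and the three relationship kinds.

```python
# Hypothetical sketch: deciding whether two detected objects have an
# "adjacent positional relationship", using boxes (x1, y1, x2, y2).

def box_gap(a, b):
    """Smallest gap between two boxes; 0 if they touch or overlap."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx ** 2 + dy ** 2) ** 0.5

def contains(a, b):
    """True if box a fully contains box b (an 'inclusion' relationship)."""
    return a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2] and a[3] >= b[3]

def adjacent_relation(a, b, dist_threshold=20.0):
    """Return the adjacency label, or None when the objects are not adjacent."""
    if contains(a, b) or contains(b, a):
        return "inclusion"
    if box_gap(a, b) >= dist_threshold:
        return None  # too far apart: no adjacent positional relationship
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    # Classify by the dominant offset between the box centres.
    return "left-right" if abs(bx - ax) >= abs(by - ay) else "up-down"
```

For example, a taillight box lying entirely inside a car box yields `"inclusion"`, while a lamp box a couple of pixels to the side of a tall pole box yields `"up-down"` or `"left-right"` depending on the centre offset.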
In some embodiments, the environmental image may be input into a pre-trained object segmentation model, through which the environmental image is instance segmented, resulting in a plurality of target objects, and adjacent positional relationships of the plurality of target objects in the environmental image. The object segmentation model may be obtained by training a preset neural network model, where the preset neural network model may include a CNN (Convolutional Neural Networks, convolutional neural network) model, an RCNN (Region-CNN, regional convolutional neural network) model, a MaskRCNN model, or a BlendMask model, etc.
S203, determining the target type corresponding to the target object according to the adjacent position relation and the preset relation map.
The preset relation map comprises preset association relations among a plurality of target types. The preset association relationship may include the above-mentioned adjacent positional relationship, and for example, the preset association relationship may include an up-down relationship, a left-right relationship, an inclusion relationship, a category relationship, and the like.
In this step, the pending type corresponding to each target object may be first obtained; and then acquiring a target type corresponding to the target object according to the undetermined type, the adjacent position relation and the preset relation map.
For example, each target object may be input into a pre-trained classification recognition model to obtain the pending type corresponding to that target object. The classification recognition model may be obtained by training a preset neural network model; the preset neural network model may be a general detection and classification model from the prior art, and its training likewise follows prior-art practice, which this disclosure does not repeat.
After the pending types are obtained, if the adjacent positional relationship between two target objects is consistent with the preset association relationship between their corresponding pending types, the pending type of each target object can be taken as its target type.
Otherwise, if the adjacent positional relationship between the two target objects is inconsistent with the preset association relationship between their corresponding pending types, a new pending type can be identified for the target object. During re-identification, the output of the classification recognition model can be restricted so that the previously identified pending type is masked, and the model outputs a new pending type.
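The check-and-re-identify step described above can be sketched as follows. The relation pairs, class scores, and the masking strategy (dropping the previous pending type and re-picking the best remaining class) are illustrative assumptions standing in for the classification recognition model's real outputs.

```python
# Hypothetical pairs of types that have a preset association relationship.
PRESET_RELATIONS = {
    frozenset({"car", "taillight"}),
    frozenset({"pole", "traffic signal lamp"}),
}

def resolve_type(pending, neighbor_type, scores):
    """If (pending, neighbor_type) is consistent with the relation map, keep
    the pending type; otherwise mask it and re-pick the best remaining class
    from the classifier's score distribution."""
    if frozenset({pending, neighbor_type}) in PRESET_RELATIONS:
        return pending
    remaining = {t: s for t, s in scores.items() if t != pending}
    return max(remaining, key=remaining.get)
```

With scores `{"traffic signal lamp": 0.5, "taillight": 0.3, "street lamp": 0.2}` and a neighboring `"car"`, an initial guess of `"traffic signal lamp"` is masked and corrected to `"taillight"`.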
With the above method, a plurality of target objects in an environment image around the target vehicle are acquired, together with their adjacent positional relationships in the environment image, and the target type corresponding to each target object is determined according to the adjacent positional relationships and the preset relation map, which comprises preset association relationships among a plurality of target types. By combining the association relationships among types with the adjacent positional relationships among objects, the target type of each object can be identified more accurately, the recognition errors caused by identifying single objects in isolation are avoided, and the detection accuracy of target objects is improved, thereby improving the reliability and safety of automatic driving.
In another embodiment of the present disclosure, in step S203, obtaining the target type corresponding to the target object according to the pending type, the adjacent positional relationship, and the preset relation map may include the following steps.
First, according to the preset relation map, a first target object whose pending type is a top-level target type is determined.
The top-level target type may be a type that contains other target types in the preset relation map and is not itself contained in any other target type.
Illustratively, FIG. 3 is a schematic diagram of a preset relation map according to an exemplary embodiment. In FIG. 3, the top-level target types include "car", "vegetation", "building", "road", "pole", and "overpass"; the other target types are sub-target types, including "taillight", "window", "tire", "pavement tree", "road boundary green belt", "temporary work shed", "residence", "commercial building", "street lamp", "traffic signal lamp", "non-motor vehicle sign", "pedestrian sign", "plastic road", "asphalt road", "clay road", "traffic sign board", "motor vehicle lane", and the like. The preset association relationships between the top-level target types and the sub-target types in FIG. 3 are described as follows:
the sub-target types included in the top-layer target type "car" may include "tail lamp", "window" and "tire", and the preset association relationship between the target type "car" and the target type "tail lamp" may be set as an inclusion relationship in advance, and similarly, the preset association relationship between the target type "car" and the target type "window" and the preset association relationship between the target type "tire" may also be set as an inclusion relationship in advance.
The sub-target types contained in the top-level target type "pole" may include "street lamp", "traffic signal lamp", "non-motor vehicle sign", and "pedestrian sign". The preset association relationship between the target type "pole" and the target type "traffic signal lamp" may be set in advance as a left-right relationship or an up-down relationship; that is, in the environment image, a target object of type "traffic signal lamp" may be on the left or right of, or above or below, a target object of type "pole".
The sub-target types contained in the top-level target type "overpass" may include "traffic signal lamp", "traffic sign board", and "motor vehicle lane". The preset association relationship between "overpass" and "traffic signal lamp" may be set in advance as both an inclusion relationship and an up-down relationship; that is, in the environment image, a target object of type "traffic signal lamp" may be contained in, or be below or above, a target object of type "overpass". In this way the scheme is compatible with scenes in which a traffic signal lamp is mounted at any position on a viaduct, and with the different adjacency relationships between the traffic signal lamp and the viaduct that arise from the shooting angle of the target vehicle.
The preset association relationships between the other top-level target types and their sub-target types are set similarly, and this disclosure does not repeat them.
Further, preset association relationships may also exist between top-level target types. For example, the preset association relationship between "car" and "road" may be set as an up-down relationship; that is, a target object of type "car" may be above or below a target object of type "road". (In an uphill scene, purely visual analysis may place the "car" below the "road" because of the visual effect; the preset association relationship makes this scene compatible as well.)
It should be noted that the preset association relationship between a top-level target type and a sub-target type may be referred to as a semantic relationship, while a preset association relationship between top-level target types, or between sub-target types, may be referred to as a geometric relationship. Through semantic and geometric relationships, accurate structured scene relationships for different environments can be constructed in advance, improving the type recognition accuracy for target objects.
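One possible in-memory encoding of such a relation map, with the semantic and geometric relationships described above, is sketched below. The specific types and relations shown are a small illustrative subset of FIG. 3, not the full map.

```python
# Each edge of the relation map keys an unordered pair of types and records
# whether it is a semantic (top-level -> sub-type) or geometric edge, plus
# the allowed adjacency relationships. Contents are illustrative.
RELATION_MAP = {
    frozenset({"car", "taillight"}): {"kind": "semantic", "relations": {"inclusion"}},
    frozenset({"car", "window"}): {"kind": "semantic", "relations": {"inclusion"}},
    frozenset({"car", "tire"}): {"kind": "semantic", "relations": {"inclusion"}},
    frozenset({"pole", "traffic signal lamp"}): {
        "kind": "semantic", "relations": {"left-right", "up-down"}},
    frozenset({"car", "road"}): {"kind": "geometric", "relations": {"up-down"}},
}

def candidate_types(target_type):
    """All other target types that have a preset association with target_type."""
    out = set()
    for pair in RELATION_MAP:
        if target_type in pair:
            out |= pair - {target_type}
    return out
```

Using unordered `frozenset` keys keeps the lookup direction-independent, matching the symmetric way the association relationships are described in the text.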
Second, according to the preset relation map, other target types having a preset association relationship with the top-level target type are acquired as candidate types corresponding to the first target object.
There may be one or more candidate types. The other target types having a preset association relationship with the top-level target type may be of two kinds: sub-target types contained by the top-level target type, and other top-level target types having a preset association relationship with it. For example, in FIG. 3 the target types associated with "car" include both the sub-target types "taillight", "window", and "tire", and the top-level target type "road".
Third, a second target object having an adjacent positional relationship with the first target object is determined.
Finally, the target type of the second target object is determined according to whether the candidate types include the pending type of the second target object.
Specifically, in the case where the candidate type includes the pending type of the second target object, the pending type of the second target object may be regarded as the target type of the second target object.
In the case that the candidate type does not contain the pending type of the second target object, the pending type of the second target object may be updated according to the candidate type; and taking the updated undetermined type as the target type of the second target object.
For example, suppose the first target object is a first vehicle, and the second target object having an adjacent positional relationship with it is a first tail lamp. When a single target object is identified in isolation (for example, by inputting the target object into a pre-trained classification model to obtain its corresponding pending type), the pending type of the first vehicle may be identified as "car"; however, because a tail lamp is highly similar to a traffic signal lamp and a street lamp, the pending type corresponding to the first tail lamp may be identified as "traffic signal lamp", "street lamp" or "tail lamp".
In this way, the candidate types "tail lamp", "window", "tire" and "road" can be obtained from the target type "car" of the first target object.

If the pending type corresponding to the first tail lamp is identified as "tail lamp", then since the candidate types include the pending type "tail lamp", it can be determined that the type of the first tail lamp was identified correctly, and "tail lamp" is taken as its target type.

Conversely, if the pending type corresponding to the first tail lamp is identified as "traffic signal lamp" or "street lamp", then since the candidate types contain neither of those pending types, it can be determined that the type of the first tail lamp was identified incorrectly. The pending type of the second target object may then be updated according to the candidate types ("tail lamp", "window", "tire" and "road"): for example, the candidate type with the highest similarity to the second target object (the first tail lamp), provided that similarity exceeds a preset threshold (e.g. "tail lamp"), may be selected as the updated pending type, and the updated pending type ("tail lamp") taken as the target type of the second target object (the first tail lamp). In this way, type identification errors can be avoided and the accuracy of type identification improved.
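The top-down correction just described can be sketched as follows. The function and variable names, the similarity scores and the threshold value are illustrative assumptions, not details taken from the patent:

```python
# Hypothetical sketch: keep a pending type that the relationship map supports,
# otherwise replace it with the best-matching candidate above a preset threshold.
RELATION_MAP = {"car": ["tail lamp", "window", "tire", "road"]}
PRESET_THRESHOLD = 0.5

def correct_type(pending_type, candidates, similarity):
    if pending_type in candidates:
        return pending_type                  # identification confirmed
    best = max(candidates, key=similarity)   # most similar candidate type
    return best if similarity(best) > PRESET_THRESHOLD else pending_type

# A first tail lamp misidentified as a street lamp next to a "car":
similarities = {"tail lamp": 0.9, "window": 0.2, "tire": 0.1, "road": 0.05}
corrected = correct_type("street lamp", RELATION_MAP["car"], similarities.get)
```

Here `corrected` becomes "tail lamp", since "street lamp" is not among the candidate types and "tail lamp" is the candidate most similar to the object.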
In this way, the target types of the target objects are identified and corrected from top to bottom according to the preset association relationships among the plurality of target types in the preset relationship map, further improving the accuracy of identifying the types of the target objects.
In another embodiment of the present disclosure, step S203 of acquiring the target type corresponding to the target object according to the pending type, the adjacent positional relationship and the preset relationship map may further include the following steps:
First, a third target object and a fourth target object having an adjacent positional relationship are acquired.

Secondly, whether a preset association relationship exists between the third pending type of the third target object and the fourth pending type of the fourth target object is determined according to the preset relationship map.

Finally, the target types of the two target objects are determined according to whether a preset association relationship exists between the third pending type and the fourth pending type.

Specifically, in the case that a preset association relationship exists between the third pending type and the fourth pending type, the target types of the third target object and the fourth target object may be determined according to the third pending type and the fourth pending type. For example, the third pending type may be taken as the target type of the third target object, and the fourth pending type as the target type of the fourth target object.

In the case that no preset association relationship exists between the third pending type and the fourth pending type, the third pending type and the fourth pending type are updated according to the preset relationship map, and the target types of the third target object and the fourth target object are determined according to the updated third pending type and fourth pending type. For example, the updated third pending type may be taken as the target type of the third target object, and the updated fourth pending type as the target type of the fourth target object.
The above steps are described below, again taking a vehicle and a tail lamp as an example:
According to the environment image, a third target object and a fourth target object having an adjacent positional relationship can be acquired, namely a second vehicle and a second tail lamp respectively.

When type recognition is performed on the third target object and the fourth target object, the pending type of the second vehicle may likewise be identified as "car"; and because a tail lamp is highly similar to a traffic signal lamp and a street lamp, the pending type corresponding to the second tail lamp may be identified as "traffic signal lamp", "street lamp" or "tail lamp".

If the third pending type of the third target object (the second vehicle) is identified as "car" and the fourth pending type of the fourth target object (the second tail lamp) is identified as "tail lamp", it can be determined from the preset relationship map that a preset association relationship exists between the two pending types ("car" and "tail lamp"). In this case each pending type may be taken as the corresponding target type; that is, the target type of the third target object (the second vehicle) is determined to be "car", and the target type of the fourth target object (the second tail lamp) to be "tail lamp".

Conversely, if the third pending type of the third target object (the second vehicle) is identified as "car" and the fourth pending type of the fourth target object (the second tail lamp) is identified as "traffic signal lamp", it can be determined from the preset relationship map that no preset association relationship exists between the two pending types ("car" and "traffic signal lamp"). An identification error between the two types can thus be determined; the third pending type and the fourth pending type can be updated according to the preset relationship map, and the target types of the third target object and the fourth target object determined according to the updated third pending type and fourth pending type.
Further, updating the third pending type and the fourth pending type according to the preset relationship map may include:

First, a third similarity between the third target object and the third pending type, and a fourth similarity between the fourth target object and the fourth pending type, are acquired.

Secondly, if the third similarity is greater than or equal to the fourth similarity, the third pending type is kept unchanged (that is, the current third pending type is taken as the updated third pending type); one or more association types having a preset association relationship with the third pending type (for "car", e.g. "tail lamp", "window", "tire" and "road") are acquired, and from among them the type with the highest similarity to the fourth target object (the second tail lamp), provided that similarity exceeds a preset threshold (e.g. "tail lamp"), is taken as the updated fourth pending type.

Otherwise, if the third similarity is smaller than the fourth similarity, the fourth pending type is kept unchanged (that is, the current fourth pending type is taken as the updated fourth pending type); one or more association types having a preset association relationship with the fourth pending type (e.g. "traffic signal lamp") are acquired, and from among them the type with the highest similarity to the third target object (the second vehicle), provided that similarity exceeds the preset threshold, is taken as the updated third pending type.
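The pairwise check and similarity-based update above can be sketched as follows. `has_relation`, `best_related` and the example scores are hypothetical placeholders for components the patent leaves unspecified:

```python
# Hypothetical sketch of the pairwise consistency check between two target
# objects having an adjacent positional relationship.
RELATION_MAP = {"car": {"tail lamp", "window", "tire", "road"}}

def has_relation(type_a, type_b):
    """True if the two types have a preset association in the map."""
    return (type_b in RELATION_MAP.get(type_a, set())
            or type_a in RELATION_MAP.get(type_b, set()))

def resolve_pair(type3, sim3, type4, sim4, best_related):
    """Keep related pending types; otherwise keep the more confident one and
    re-derive the other from the types associated with it."""
    if has_relation(type3, type4):
        return type3, type4
    if sim3 >= sim4:
        return type3, best_related(type3)   # keep third, update fourth
    return best_related(type4), type4       # keep fourth, update third

# A second vehicle ("car", similarity 0.9) adjacent to a tail lamp that was
# misidentified as a "traffic signal lamp" (similarity 0.6):
picker = lambda t: "tail lamp"  # hypothetical highest-similarity lookup
resolved = resolve_pair("car", 0.9, "traffic signal lamp", 0.6, picker)
```

Since "car" and "traffic signal lamp" have no preset association and the vehicle's similarity is higher, `resolved` keeps "car" and re-derives the fourth type as "tail lamp".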
In this way, for two target objects having an adjacent positional relationship in the environment image, the target types of both objects can be identified and corrected according to the preset association relationships in the preset relationship map, improving the accuracy of identifying target objects.
Fig. 4 is a block diagram of an object detection apparatus 400, according to an exemplary embodiment, as shown in fig. 4, the apparatus 400 may include:
an image acquisition module 401 configured to acquire an environmental image of the periphery of the target vehicle;
an object acquisition module 402 configured to acquire a plurality of target objects in the environment image, and adjacent positional relationships of the plurality of target objects in the environment image;
The type determining module 403 is configured to determine a target type corresponding to the target object according to the adjacent position relationship and a preset relationship map, where the preset relationship map includes preset association relationships between a plurality of target types.
Optionally, the type determining module 403 is configured to obtain a pending type corresponding to each of the target objects; and acquiring a target type corresponding to the target object according to the undetermined type, the adjacent position relation and the preset relation map.
Optionally, the type determining module 403 is configured to determine, according to the preset relationship map, a first target object whose pending type is a top-level target type, the top-level target type being a type that is contained in the preset relationship map and is not contained in any other target type; take other target types having a preset association relationship with the top-level target type in the preset relationship map as candidate types corresponding to the first target object; determine a second target object having an adjacent positional relationship with the first target object; and, in the case that the candidate types include the pending type of the second target object, take that pending type as the target type of the second target object.
Optionally, the type determining module 403 is configured to update the pending type of the second target object according to the candidate type if the candidate type does not include the pending type of the second target object; and taking the updated undetermined type as the target type of the second target object.
Optionally, the type determining module 403 is configured to acquire a third target object and a fourth target object having a neighboring positional relationship; determining whether a preset association relationship exists between a third type to be determined of the third target object and a fourth type to be determined of the fourth target object according to the preset relationship map; and under the condition that a preset association relation exists between the third pending type and the fourth pending type, determining the target types of the third target object and the fourth target object according to the third pending type and the fourth pending type.
Optionally, the type determining module 403 is configured to update the third pending type and the fourth pending type according to the preset relationship map if there is no preset association between the third pending type and the fourth pending type; and determining the target types of the third target object and the fourth target object according to the updated third undetermined type and fourth undetermined type.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method, and will not be repeated here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the object detection method provided by the present disclosure. The computer readable storage medium may be a non-transitory computer readable storage medium.
Fig. 5 is a block diagram of an electronic device 900, shown in accordance with an exemplary embodiment. For example, electronic device 900 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, router, or the like.
Referring to fig. 5, an electronic device 900 may include one or more of the following components: a processing component 902, a memory 904, a power component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and a communication component 916.
The processing component 902 generally controls overall operation of the electronic device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 902 may include one or more processors 920 to execute instructions to perform all or part of the steps of the object detection method described above. Further, the processing component 902 can include one or more modules that facilitate interaction between the processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operations at the electronic device 900. Examples of such data include instructions for any application or method operating on the electronic device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type of volatile or nonvolatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 906 provides power to the various components of the electronic device 900. Power components 906 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for electronic device 900.
The multimedia component 908 comprises a screen providing an output interface between the electronic device 900 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front-facing camera and/or a rear-facing camera. When the electronic device 900 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 900 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 904 or transmitted via the communication component 916. In some embodiments, the audio component 910 further includes a speaker for outputting audio signals.
The I/O interface 912 provides an interface between the processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 914 includes one or more sensors for providing status assessment of various aspects of the electronic device 900. For example, the sensor assembly 914 may detect an on/off state of the electronic device 900 and the relative positioning of components, such as the display and keypad of the electronic device 900; the sensor assembly 914 may also detect a change in position of the electronic device 900 or one of its components, the presence or absence of user contact with the electronic device 900, the orientation or acceleration/deceleration of the electronic device 900, and a change in its temperature. The sensor assembly 914 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 914 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate communication, wired or wireless, between the electronic device 900 and other devices. The electronic device 900 may access a wireless network based on a communication standard, such as Wi-Fi, 2G, 3G, 4G, 5G, 6G, NB-IoT or eMTC, or a combination thereof. In one exemplary embodiment, the communication component 916 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 916 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the above-described target detection methods.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as a memory 904 including instructions executable by the processor 920 of the electronic device 900 to perform the above-described target detection method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
In another exemplary embodiment, a computer program product is also provided, comprising a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described object detection method when executed by the programmable apparatus.
Fig. 6 is a block diagram of a vehicle according to an example embodiment; as shown in fig. 6, the vehicle may include the electronic device 900 described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (8)

1. A method of target detection, the method comprising:
acquiring an environment image around a target vehicle;
acquiring a plurality of target objects in the environment image and adjacent position relations of the plurality of target objects in the environment image;
determining a target type corresponding to the target object according to the adjacent position relation and a preset relation map, wherein the preset relation map comprises preset association relations among a plurality of target types;
the determining the target type corresponding to the target object according to the adjacent position relation and the preset relation map comprises the following steps:
acquiring a pending type corresponding to each target object;
acquiring a target type corresponding to the target object according to the undetermined type, the adjacent position relation and the preset relation map;
the obtaining the target type corresponding to the target object according to the undetermined type, the adjacent position relation and the preset relation map comprises the following steps:
according to the preset relation graph, determining a first target object with a pending type being a top-level target type; the top-level object type is a type which is contained in the preset relation map and is not contained in other object types;
Taking other target types with preset association relation with the top target type in the preset relation map as candidate types corresponding to the first target object;
determining a second target object with adjacent position relation with the first target object;
and taking the undetermined type of the second target object as the target type of the second target object in the condition that the candidate type comprises the undetermined type of the second target object.
2. The method according to claim 1, wherein the method further comprises:
updating the pending type of the second target object according to the candidate type if the candidate type does not contain the pending type of the second target object;
and taking the updated undetermined type as the target type of the second target object.
3. The method according to claim 1, wherein the obtaining the target type corresponding to the target object according to the pending type, the neighboring relationship, and the preset relationship map includes:
acquiring a third target object and a fourth target object with adjacent position relations;
determining whether a preset association relationship exists between a third undetermined type of the third target object and a fourth undetermined type of the fourth target object according to the preset relationship map;
And under the condition that a preset association relation exists between the third undetermined type and the fourth undetermined type, determining the target types of the third target object and the fourth target object according to the third undetermined type and the fourth undetermined type.
4. A method according to claim 3, characterized in that the method further comprises:
updating the third undetermined type and the fourth undetermined type according to the preset relation map under the condition that the preset association relation does not exist between the third undetermined type and the fourth undetermined type;
and determining the target types of the third target object and the fourth target object according to the updated third pending type and fourth pending type.
5. An object detection device, the device comprising:
an image acquisition module configured to acquire an environmental image of a periphery of a target vehicle;
an object acquisition module configured to acquire a plurality of target objects in the environment image and adjacent positional relationships of the plurality of target objects in the environment image;
the type determining module is configured to determine a target type corresponding to the target object according to the adjacent position relation and a preset relation map, wherein the preset relation map comprises preset association relations among a plurality of target types;
The type determining module is configured to acquire a pending type corresponding to each target object; acquiring a target type corresponding to the target object according to the undetermined type, the adjacent position relation and the preset relation map;
the type determining module is configured to determine a first target object with a pending type being a top-level target type according to the preset relation graph; the top-level object type is a type which is contained in the preset relation map and is not contained in other object types; taking other target types with preset association relations in the top-level target types in the preset relation map as candidate types corresponding to the first target object; determining a second target object with adjacent position relation with the first target object; and taking the undetermined type of the second target object as the target type of the second target object in the condition that the candidate type comprises the undetermined type of the second target object.
6. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of any one of claims 1 to 4.
7. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the steps of the method of any of claims 1 to 4.
8. A vehicle, characterized in that it comprises the electronic device of claim 6.
CN202210249458.5A 2022-03-14 2022-03-14 Target detection method, target detection device, storage medium, electronic equipment and vehicle Active CN114627443B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210249458.5A CN114627443B (en) 2022-03-14 2022-03-14 Target detection method, target detection device, storage medium, electronic equipment and vehicle


Publications (2)

Publication Number Publication Date
CN114627443A CN114627443A (en) 2022-06-14
CN114627443B true CN114627443B (en) 2023-06-09

Family

ID=81902987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210249458.5A Active CN114627443B (en) 2022-03-14 2022-03-14 Target detection method, target detection device, storage medium, electronic equipment and vehicle

Country Status (1)

Country Link
CN (1) CN114627443B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115538888A (en) * 2022-10-24 2022-12-30 重庆长安汽车股份有限公司 Vehicle window control method, device, equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310604A (en) * 2020-01-21 2020-06-19 华为技术有限公司 Object detection method and device and storage medium
CN113326715A (en) * 2020-02-28 2021-08-31 初速度(苏州)科技有限公司 Target association method and device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463899B (en) * 2014-12-31 2017-09-22 北京格灵深瞳信息技术有限公司 A kind of destination object detection, monitoring method and its device
JP6649865B2 (en) * 2016-10-27 2020-02-19 株式会社Soken Object detection device
CN109740415B (en) * 2018-11-19 2021-02-09 深圳市华尊科技股份有限公司 Vehicle attribute identification method and related product
CN113673282A (en) * 2020-05-14 2021-11-19 华为技术有限公司 Target detection method and device
CN111814538B (en) * 2020-05-25 2024-03-05 北京达佳互联信息技术有限公司 Method and device for identifying category of target object, electronic equipment and storage medium
CN112417967B (en) * 2020-10-22 2021-12-14 腾讯科技(深圳)有限公司 Obstacle detection method, obstacle detection device, computer device, and storage medium
CN112329772B (en) * 2020-11-06 2024-03-05 浙江大搜车软件技术有限公司 Vehicle part identification method, device, electronic device and storage medium
CN113283396A (en) * 2021-06-29 2021-08-20 艾礼富电子(深圳)有限公司 Target object class detection method and device, computer equipment and storage medium
CN114120287A (en) * 2021-12-03 2022-03-01 腾讯科技(深圳)有限公司 Data processing method, data processing device, computer equipment and storage medium



Similar Documents

Publication Publication Date Title
JP7035270B2 (en) Vehicle door unlocking methods and devices, systems, vehicles, electronic devices and storage media
CN110335389B (en) Vehicle door unlocking method, vehicle door unlocking device, vehicle door unlocking system, electronic equipment and storage medium
CN112419328A (en) Image processing method and device, electronic equipment and storage medium
CN110543850B (en) Target detection method and device and neural network training method and device
CN114620072B (en) Vehicle control method and device, storage medium, electronic equipment and vehicle
CN112149697A (en) Indicating information identification method and device of indicator lamp, electronic equipment and storage medium
WO2021057244A1 (en) Light intensity adjustment method and apparatus, electronic device and storage medium
CN111507973B (en) Target detection method and device, electronic equipment and storage medium
US20210150232A1 (en) Method and device for detecting a state of signal indicator light, and storage medium
CN114627443B (en) Target detection method, target detection device, storage medium, electronic equipment and vehicle
EP3309711A1 (en) Vehicle alert apparatus and operating method thereof
CN114312812B (en) Vehicle control method and device based on dynamic perception and electronic equipment
CN113313115B (en) License plate attribute identification method and device, electronic equipment and storage medium
CN105719488A (en) License plate recognition method and apparatus, and camera and system for license plate recognition
CN115965935B (en) Object detection method, device, electronic apparatus, storage medium, and program product
CN114723715B (en) Vehicle target detection method, device, equipment, vehicle and medium
CN107458299B (en) Vehicle lamp control method and device and computer readable storage medium
KR101986463B1 (en) Parking guidance system and method for controlling thereof
CN116206363A (en) Behavior recognition method, apparatus, device, storage medium, and program product
CN116834767A (en) Motion trail generation method, device, equipment and storage medium
CN107992789B (en) Method and device for identifying traffic light and vehicle
CN116052461A (en) Virtual parking space determining method, display method, device, equipment, medium and program
CN112949556B (en) Light intensity control method and device, electronic equipment and storage medium
CN114633764B (en) Traffic signal lamp detection method and device, storage medium, electronic equipment and vehicle
CN116985839A (en) Motion control method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant