CN116311976A - Signal lamp control method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN116311976A
CN116311976A (application CN202211689956.8A)
Authority
CN
China
Prior art keywords
road
image
color
signal lamp
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211689956.8A
Other languages
Chinese (zh)
Inventor
孙连鹏
李良斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing SoundAI Technology Co Ltd
Original Assignee
Beijing SoundAI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing SoundAI Technology Co Ltd filed Critical Beijing SoundAI Technology Co Ltd
Priority to CN202211689956.8A priority Critical patent/CN116311976A/en
Publication of CN116311976A publication Critical patent/CN116311976A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/07Controlling traffic signals
    • G08G1/085Controlling traffic signals using a free-running cyclic timer
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a signal lamp control method, device, and equipment, and a computer readable storage medium, belonging to the technical field of control. The method comprises the following steps: acquiring a first image of a first road, wherein the color of the signal lamp of the first road is a first color, and the first color is used for indicating that the first road is not passable; identifying the first image to obtain a first identification result; acquiring a second image of a second road based on the first identification result indicating that the first image includes a vehicle waiting to pass through the signal lamp of the first road, wherein the color of the signal lamp of the second road is a second color, and the second color is used for indicating that the second road is passable; identifying the second image to obtain a second identification result; and controlling the color of the signal lamp of the first road to change to the second color based on the second identification result indicating that the second image does not include a vehicle waiting to pass through the signal lamp of the second road. The method enables vehicles on the first road to pass quickly, improving the passing efficiency of vehicles on the first road.

Description

Signal lamp control method, device, equipment and computer readable storage medium
Technical Field
The embodiment of the application relates to the technical field of control, in particular to a control method, a device, equipment and a computer readable storage medium of a signal lamp.
Background
With the continuous improvement of living standards, automobiles have become increasingly common in daily life. While driving, waiting at a signal lamp is a frequent occurrence; during rush hour in particular, traffic is heavily congested and journeys are seriously delayed.
However, regardless of the traffic flow at an intersection, the duration of each signal lamp phase is fixed. As a result, a first road may carry very heavy traffic while its signal lamp is red, and a second road may carry very light traffic while its signal lamp is green, so that the first road is seriously congested while the second road is clear but nearly empty. The waiting time of vehicles on the first road is therefore long, and their passing efficiency is low.
Disclosure of Invention
The embodiment of the application provides a control method, a device, equipment and a computer readable storage medium for a signal lamp, which can be used for solving the problems in the related art. The technical scheme is as follows:
In one aspect, an embodiment of the present application provides a method for controlling a signal lamp, where the method includes:
Acquiring a first image of a first road, wherein the color of a signal lamp of the first road is a first color, and the first color is used for indicating that the first road cannot pass;
identifying the first image to obtain a first identification result;
acquiring a second image of a second road based on the first identification result indicating that the first image includes a vehicle waiting to pass through the signal lamp of the first road, wherein the second road and the first road are intersecting roads, the color of the signal lamp of the second road is a second color, and the second color is used for indicating that the second road is passable;
identifying the second image to obtain a second identification result;
and controlling the color of the signal lamp of the first road to change to the second color based on the second identification result indicating that the second image does not include a vehicle waiting to pass through the signal lamp of the second road.
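The claimed steps can be sketched as a minimal control loop. This is an illustration only, not the patent's implementation; `capture`, `recognize`, and `set_color` are hypothetical helpers standing in for the camera interaction, image identification, and lamp control described below.

```python
def control_signal(first_road, second_road, capture, recognize, set_color):
    """Sketch of the claimed method.

    capture(road) -> image of that road; recognize(image, road) -> True
    if a vehicle is waiting at that road's stop line; set_color(road,
    color) changes the signal. All helper names are illustrative.
    """
    first_image = capture(first_road)             # first road shows red
    if recognize(first_image, first_road):        # a vehicle is waiting on red
        second_image = capture(second_road)       # second road shows green
        if not recognize(second_image, second_road):  # the green road is empty
            set_color(first_road, "green")        # let the waiting vehicle pass
            set_color(second_road, "red")         # and stop the empty road
```

The light swap happens only when both conditions hold: a vehicle waits on the red road and no vehicle waits on the green one.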
In one possible implementation manner, the identifying the first image to obtain a first identification result includes:
identifying the first image to obtain first content, wherein the first content comprises object information of an object included in the first image;
Determining an actual distance from a first vehicle to a first reference line based on the first content indication that the first image comprises the first vehicle, wherein the first reference line is a stop line of a signal lamp of the first road;
based on the actual distance of the first vehicle from the first reference line being less than a first threshold, it is determined that the first recognition result indicates that the first image includes a vehicle waiting to pass a signal light of the first road.
In one possible implementation, the determining the actual distance of the first vehicle from the first reference line includes:
acquiring a first proportion of the first image, wherein the first proportion is a pixel proportion of a first image pickup device for picking up the first image;
determining an image distance of the first vehicle from the first reference line in the first image;
and converting the image distance according to the first proportion to obtain the actual distance from the first vehicle to the first reference line.
In one possible implementation, the actual distance from the first vehicle to the first reference line is the actual distance from the head of the first vehicle to the first reference line;
or, the actual distance from the first vehicle to the first reference line is the actual distance from the tail of the first vehicle to the first reference line.
In one possible implementation manner, the identifying the second image to obtain a second identification result includes:
determining a target area in the second image, wherein the target area is determined based on a second reference line, and the second reference line is a stop line of a signal lamp of the second road;
identifying the target area to obtain second content, wherein the second content comprises object information of objects included in the target area;
and determining that the second identification result indicates that the second image does not include vehicles waiting to pass through the signal lamp of the second road based on the second content indicating that the vehicles are not included in the target area.
In one possible implementation manner, the determining the target area in the second image includes:
acquiring a second proportion of the second image, wherein the second proportion is a pixel proportion of a second image pickup device for picking up the second image;
determining a target distance according to the second proportion and a second threshold value;
and taking, as the target area, the area within the target distance of the second reference line on the side where vehicles have not yet crossed the second reference line.
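A minimal sketch of this target-area computation follows. The patent does not fix a coordinate convention, so the code assumes the second threshold is an actual distance in metres, that the proportion is expressed here as pixels per metre, and that vehicles approach the stop line from larger image rows; all parameter names are illustrative.

```python
def target_region(stop_line_y, pixels_per_metre, second_threshold_m):
    """Return the (top, bottom) pixel rows of the target area: the band
    covering second_threshold_m metres on the approach side of the
    second reference line (the stop line of the second road)."""
    target_distance_px = int(round(second_threshold_m * pixels_per_metre))
    return stop_line_y, stop_line_y + target_distance_px
```

For example, with the stop line at row 400, 20 pixels per metre, and a 5-metre threshold, the target area would be rows 400 to 500.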
In one possible implementation, the method further includes:
and controlling the color of the signal lamp of the second road to become the first color based on the second recognition result indicating that the second image does not include the vehicle waiting to pass the signal lamp of the second road.
In another aspect, an embodiment of the present application provides a control device for a signal lamp, including:
an acquisition module, configured to acquire a first image of a first road, wherein the color of the signal lamp of the first road is a first color, and the first color is used for indicating that the first road is not passable;
the identification module is used for identifying the first image to obtain a first identification result;
the acquisition module is further configured to acquire, based on the first identification result indicating that the first image includes a vehicle waiting to pass through the signal lamp of the first road, a second image of a second road, wherein the second road and the first road are intersecting roads, the color of the signal lamp of the second road is a second color, and the second color is used for indicating that the second road is passable;
the identification module is further used for identifying the second image to obtain a second identification result;
and a control module, configured to control the color of the signal lamp of the first road to change to the second color based on the second identification result indicating that the second image does not include a vehicle waiting to pass through the signal lamp of the second road.
In a possible implementation manner, the identifying module is configured to identify the first image, so as to obtain first content, where the first content includes object information of an object included in the first image; determining an actual distance from a first vehicle to a first reference line based on the first content indication that the first image comprises the first vehicle, wherein the first reference line is a stop line of a signal lamp of the first road; based on the actual distance of the first vehicle from the first reference line being less than a first threshold, it is determined that the first recognition result indicates that the first image includes a vehicle waiting to pass a signal light of the first road.
In one possible implementation manner, the identifying module is configured to obtain a first proportion of the first image, where the first proportion is a pixel proportion of a first image capturing device that captures the first image; determining an image distance of the first vehicle from the first reference line in the first image; and converting the image distance according to the first proportion to obtain the actual distance from the first vehicle to the first reference line.
In one possible implementation, the actual distance from the first vehicle to the first reference line is the actual distance from the head of the first vehicle to the first reference line;
or, the actual distance from the first vehicle to the first reference line is the actual distance from the tail of the first vehicle to the first reference line.
In a possible implementation manner, the identifying module is configured to determine a target area in the second image, where the target area is determined based on a second reference line, and the second reference line is a stop line of a signal lamp of the second road; identifying the target area to obtain second content, wherein the second content comprises object information of objects included in the target area; and determining that the second identification result indicates that the second image does not include vehicles waiting to pass through the signal lamp of the second road based on the second content indicating that the vehicles are not included in the target area.
In a possible implementation manner, the identifying module is configured to obtain a second proportion of the second image, where the second proportion is a pixel proportion of a second image capturing device that captures the second image; determining a target distance according to the second proportion and a second threshold value; and taking the area which is the target distance and does not pass through the second reference line as the target area.
In one possible implementation, the control module is further configured to control a color of the signal light of the second road to change to the first color based on the second identification result indicating that the second image does not include a vehicle waiting to pass the signal light of the second road.
In another aspect, an embodiment of the present application provides a computer device comprising a processor and a memory, where at least one program code is stored in the memory and is loaded and executed by the processor, so that the computer device implements the signal lamp control method according to any one of the above.
In another aspect, a computer readable storage medium is provided, in which at least one program code is stored, the at least one program code being loaded and executed by a processor to cause a computer device to implement the signal lamp control method according to any one of the above.
In another aspect, a computer program or computer program product is also provided, in which at least one computer instruction is stored, the at least one computer instruction being loaded and executed by a processor to cause a computer device to implement the signal lamp control method according to any one of the above.
The technical scheme provided by the embodiment of the application at least brings the following beneficial effects:
according to the technical scheme, vehicles waiting for the signal lamp passing through the first road exist on the first road, the color of the signal lamp of the first road is the first color, and the color of the signal lamp of the second road does not exist on the second road, but when the color of the signal lamp of the second road is the second color, the color of the signal lamp of the first road is adjusted, so that the vehicles on the first road can pass quickly, the passing time of the vehicles on the first road is reduced, and the passing efficiency of the vehicles on the first road is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that a person skilled in the art may obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic diagram of an implementation environment of a control method of a signal lamp according to an embodiment of the present application;
Fig. 2 is a flowchart of a method for controlling a signal lamp according to an embodiment of the present application;
FIG. 3 is a schematic illustration of a first image provided in an embodiment of the present application;
fig. 4 is a schematic diagram of a signal lamp control according to an embodiment of the present application;
fig. 5 is a flowchart of a method for controlling a signal lamp according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a control device for a signal lamp according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal device provided in an embodiment of the present application;
fig. 8 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment of a method for controlling a signal lamp according to an embodiment of the present application, as shown in fig. 1, where the implementation environment includes: an electronic apparatus 101, a first image pickup device 102, a second image pickup device 103, a first signal lamp 104, and a second signal lamp 105. The electronic device 101 may be a terminal device or a server, which is not limited in the embodiment of the present application. The first camera 102 is used for acquiring a road image of a first road, the second camera 103 is used for acquiring a road image of a second road, the first signal lamp 104 is a signal lamp corresponding to the first road, and the second signal lamp 105 is a signal lamp corresponding to the second road. The electronic device 101 controls the first signal lamp 104 and the second signal lamp 105 through interaction with the first image capturing device 102 and the second image capturing device 103, so as to realize the signal lamp control method provided by the embodiment of the application.
Optionally, when the electronic device 101 is a terminal device, the terminal device is any electronic product that can perform human-machine interaction with a user through one or more of a keyboard, a touch pad, a touch screen, a remote controller, voice interaction, or a handwriting device, for example, a PC (Personal Computer), a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a wearable device, a PPC (Pocket PC), a tablet computer, a smart in-vehicle terminal, a smart television, a smart speaker, and the like. When the electronic device 101 is a server, it may be a single server, a server cluster formed by a plurality of servers, or a cloud computing service center.
It will be appreciated by those skilled in the art that the above terminal devices and servers are merely illustrative; other terminal devices or servers, now existing or later developed, that are applicable to the present application are also intended to fall within the scope of protection of the present application.
The embodiment of the present application provides a method for controlling a signal lamp, which may be applied to the implementation environment shown in fig. 1, and takes a flowchart of the method for controlling a signal lamp, which is shown in fig. 2 and provided in the embodiment of the present application, as an example, the method may be executed by the electronic device 101 in fig. 1, where the electronic device 101 may be a terminal device or a server. As shown in fig. 2, the method includes the following steps 201 to 205.
In step 201, a first image of a first road is acquired, and a signal light of the first road has a first color, where the first color is used to indicate that the first road is not passable.
In an exemplary embodiment of the present application, the electronic device and the first image capturing device are communicatively connected through a wired or wireless network, and the first image capturing device is configured to capture road images of the first road. The first image capturing device may capture a road image of the first road after receiving an image acquisition request, or may capture a road image of the first road once every reference duration.
The reference duration is set based on experience or adjusted according to the implementation environment, which is not limited in the embodiment of the present application. The reference duration is, for example, 5 seconds; that is, the first image capturing device captures a road image of the first road every 5 seconds.
After the first image of the first road is acquired by the first camera device, the first image of the first road is sent to the electronic equipment, so that the electronic equipment acquires the first image of the first road. The color of the signal lamp of the first road is a first color, and the first color is used for indicating that the first road cannot pass. For example, the signal light of the first road is a traffic light on the first road, and the signal light of the first road is red in color.
Optionally, after capturing the first image of the first road, the first image capturing device may also record the acquisition time of the first image and, when sending the first image to the electronic device, send the acquisition time as well. The electronic device thus obtains the acquisition time of the first image of the first road, which is used subsequently when acquiring the second image of the second road.
Alternatively, the first image capturing device may be a camera set up on the first road, or may be another device, which is not limited in this embodiment of the present application.
In step 202, the first image is identified, and a first identification result is obtained.
In one possible implementation manner, after the electronic device acquires the first image, the process of identifying the first image to obtain the first identification result includes: identifying the first image to obtain first content, wherein the first content comprises object information of an object included in the first image; determining an actual distance from a first vehicle to a first reference line based on the first content indicating that the first image includes the first vehicle; and determining, based on the actual distance from the first vehicle to the first reference line being less than a first threshold, that the first identification result indicates that the first image includes a vehicle waiting to pass through the signal lamp of the first road. The object information may be an object name or other information capable of representing an object, which is not limited in the embodiment of the present application. The first reference line is the stop line of the signal lamp of the first road, that is, the line that a vehicle traveling on the first road cannot cross when the color of the signal lamp of the first road is the first color. The first threshold is set empirically or adjusted according to the implementation environment, which is not limited in this embodiment; the first threshold is, for example, 5 meters. Fig. 3 is a schematic diagram of a first image according to an embodiment of the present application, where 301 is the first reference line.
In some embodiments, identifying the first image to obtain the first content includes: inputting the first image into an image recognition model, and obtaining first content based on the output result of the image recognition model, wherein the first content comprises object information of an object included in the first image. According to the first content, it is determined whether a vehicle is included in the first image. Optionally, determining that the first image includes a vehicle based on the first content including the target keyword; based on the first content not including the target keyword, it is determined that the first image does not include the vehicle. The target keywords may be one or more of "car", "vehicle", "car". Illustratively, the first content includes an automobile, a tree, a zebra crossing. The target keyword is "vehicle", "car", and since the target keyword is included in the first content, it is determined that the vehicle is included in the first image.
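The keyword check described above can be sketched as follows. This is a minimal illustration: the keyword set and the shape of the model output (a list of object names) are assumptions, since the patent does not fix either.

```python
TARGET_KEYWORDS = ("car", "vehicle", "automobile")  # illustrative target keywords

def contains_vehicle(first_content):
    """Return True if the first content (here assumed to be the list of
    object names output by the image recognition model) contains any
    target keyword."""
    return any(keyword in name.lower()
               for name in first_content
               for keyword in TARGET_KEYWORDS)
```

With first content ["automobile", "tree", "zebra crossing"], the keyword "automobile" matches, so the first image is judged to include a vehicle.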
The image recognition model may be any model capable of recognizing object information of objects included in an image, which is not limited in the embodiment of the present application. Illustratively, the image recognition model is a convolutional neural network (CNN) model.
Optionally, the actual distance of the first vehicle to the first reference line is the actual distance of a certain portion of the first vehicle to the first reference line. For example, the actual distance of the first vehicle to the first reference line is the actual distance of the head of the first vehicle to the first reference line; for another example, the actual distance from the first vehicle to the first reference line is the actual distance from the tail of the first vehicle to the first reference line, which is not limited in the embodiment of the present application.
The process of determining the actual distance of the first vehicle from the first reference line is also not limited in the embodiments of the present application. Optionally, determining the actual distance of the first vehicle from the first reference line comprises: acquiring a first proportion of a first image, wherein the first proportion is a pixel proportion of a first image pickup device for picking up the first image; determining an image distance of the first vehicle to a first reference line in the first image; according to the first proportion, the image distance is converted, and the actual distance from the first vehicle to the first reference line is obtained.
The process by which the electronic device acquires the first proportion of the first image includes: the electronic device sends a first proportion acquisition request to the first image capturing device, the request being used for acquiring the pixel proportion of the first image capturing device; the first image capturing device, which stores the first proportion, receives the request, obtains the first proportion according to the request, and sends the first proportion to the electronic device, so that the electronic device acquires the first proportion. Illustratively, the first proportion is 1:5.
Alternatively, the electronic device stores the pixel proportion of each image capturing device and the correspondence between each image capturing device and each road. The electronic device determines the first image capturing device corresponding to the first road, obtains the pixel proportion of the first image capturing device, and takes that pixel proportion as the first proportion.
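This alternative lookup can be sketched with two stored mappings. The dictionary format and all keys and values below are illustrative; the patent does not specify how the correspondences are stored.

```python
# Illustrative stored correspondences between roads, cameras, and proportions.
camera_of_road = {"first road": "camera-1", "second road": "camera-2"}
pixel_proportion = {"camera-1": 0.1, "camera-2": 0.1}

def proportion_for(road):
    """Find the camera covering `road`, then return its pixel proportion."""
    return pixel_proportion[camera_of_road[road]]
```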
For example, if the actual distance from the first vehicle to the first reference line is the actual distance from the head of the first vehicle to the first reference line, it is necessary to determine the image distance from the head of the first vehicle to the first reference line in the first image. For another example, if the actual distance from the first vehicle to the first reference line is the actual distance from the tail of the first vehicle to the first reference line, it is necessary to determine the image distance from the tail of the first vehicle to the first reference line in the first image.
In one possible implementation, an application for determining a distance is installed and executed in the electronic device, and the application may be any type of application, which is not limited by the embodiments of the present application. Illustratively, the application is a rangefinder. The electronic device invokes the application for determining distance to determine an image distance of the first vehicle to the first reference line in the first image.
After determining the image distance from the first vehicle to the first reference line in the first image, the image distance is converted according to the first proportion by the following formula (1) to obtain the actual distance from the first vehicle to the first reference line:

Y = X / s    (1)

In formula (1), s is the first proportion, X is the image distance, and Y is the actual distance from the first vehicle to the first reference line.

Illustratively, the first proportion of the first image is 1/10. If the image distance from the first vehicle to the first reference line in the first image is 0.5 m, the actual distance from the first vehicle to the first reference line is determined to be 5 m according to formula (1).
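Formula (1) is a one-line computation; the sketch below reproduces the worked example, with the proportion expressed as the fraction 1/10 (i.e., 0.1).

```python
def actual_distance(image_distance, first_proportion):
    """Formula (1): Y = X / s, where X is the image distance, s the
    first proportion (pixel proportion of the first camera), and Y the
    actual distance from the first vehicle to the first reference line."""
    return image_distance / first_proportion
```

With an image distance of 0.5 m and a first proportion of 1/10, this yields an actual distance of 5 m, matching the example above.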
It should be noted that if the first image includes a plurality of vehicles, the actual distance from each vehicle to the first reference line needs to be determined. If the actual distance from at least one of the plurality of vehicles to the first reference line is less than the first threshold, it is determined that the first identification result indicates that the first image includes a vehicle waiting to pass through the signal lamp of the first road; if no vehicle among the plurality of vehicles has an actual distance to the first reference line less than the first threshold, it is determined that the first identification result indicates that the first image does not include a vehicle waiting to pass through the signal lamp of the first road. The process of determining the actual distance from each vehicle to the first reference line is similar to that of determining the actual distance from the first vehicle to the first reference line, and is not described here again.
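The multi-vehicle decision above reduces to an "any distance below threshold" test, sketched here with the 5-metre example value; the list-of-distances input is an assumption about how the per-vehicle results are collected.

```python
def waiting_vehicle_present(actual_distances_m, first_threshold_m=5.0):
    """First identification result for an image with several vehicles:
    True as soon as any vehicle's actual distance to the first reference
    line is below the first threshold (5 m is the example value)."""
    return any(d < first_threshold_m for d in actual_distances_m)
```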
In step 203, a second image of a second road is acquired based on the first recognition result indicating that the first image includes a vehicle waiting to pass a signal light of the first road.
In one possible implementation, the electronic device acquires a second image of a second road based on the first identification result indicating that the first image includes a vehicle waiting to pass through the signal lamp of the first road. The second road and the first road are intersecting roads, the color of the signal lamp of the second road is a second color, and the second color is used for indicating that the second road is passable. Illustratively, the first road and the second road form a crossroads, and the color of the signal lamp of the second road is green. The electronic device may obtain the second image of the second road in either of two implementations.
In the first implementation manner, the electronic device acquires the second image of the second road through interaction with a second camera device.
The electronic device sends an image acquisition request to the second camera device, which is the camera device that captures road images of the second road. The second camera device receives the image acquisition request, captures a second image of the second road according to the request, and sends the second image to the electronic device, so that the electronic device obtains the second image of the second road.
In the second implementation manner, the electronic device stores road images of the second road, and the second image of the second road is selected from the road images of the second road stored in the electronic device.
Optionally, the second camera device captures a road image of the second road once every reference time length. Each time it captures a road image of the second road, it sends the image and its acquisition time to the electronic device, and the electronic device stores both. When the first recognition result indicates that the first image includes a vehicle waiting to pass through the signal lamp of the first road, the electronic device takes, from the stored road images of the second road, the one whose acquisition time is closest to the current time as the second image of the second road. The current time may be the time at which the first recognition result is obtained, or the time at which the first image of the first road was acquired, which is not limited in the embodiments of the present application. Illustratively, the current time is the time at which the first image of the first road was acquired: the electronic device stores the acquisition time of the first image of the first road and takes that acquisition time as the current time.
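Selecting the stored road image whose acquisition time is closest to the current time can be sketched as follows. The (acquisition_time, image) pair layout and the function name are illustrative assumptions, not from the patent.

```python
def closest_stored_image(stored_images, current_time):
    """stored_images: list of (acquisition_time, image) pairs for the
    second road; returns the image whose acquisition time is closest
    to current_time."""
    if not stored_images:
        raise ValueError("no stored road images for the second road")
    acq_time, image = min(stored_images,
                          key=lambda pair: abs(current_time - pair[0]))
    return image
```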
It should be noted that either implementation manner may be used to acquire the second image of the second road, which is not limited in the embodiments of the present application.
In step 204, the second image is identified, and a second identification result is obtained.
In one possible implementation manner, the process of recognizing the second image to obtain the second recognition result includes: determining a target area in the second image, where the target area is determined based on a second reference line, the second reference line being the stop line of the signal lamp of the second road, that is, the line that vehicles traveling on the second road may not cross when the color of the signal lamp of the second road is the first color; recognizing the target area to obtain second content, where the second content includes object information of the objects included in the target area; based on the second content indicating that no vehicle is included in the target area, determining that the second recognition result indicates that the second image does not include a vehicle waiting to pass through the signal lamp of the second road; and based on the second content indicating that a vehicle is included in the target area, determining that the second recognition result indicates that the second image includes a vehicle waiting to pass through the signal lamp of the second road.
Determining the target area in the second image includes: acquiring a second ratio of the second image, the second ratio being the pixel ratio of the second camera device that captured the second image; determining a target distance according to the second ratio and a second threshold; and taking, as the target area, the area that is within the target distance of the second reference line and has not crossed the second reference line. Optionally, the second threshold is set based on experience or adjusted according to the implementation environment, which is not limited in the embodiments of the present application. Illustratively, the second threshold is 5 meters.
Optionally, the process by which the electronic device acquires the second ratio of the second image includes: the electronic device sends a second ratio acquisition request to the second camera device, the request being used to acquire the pixel ratio of the second camera device; the second camera device receives the request, determines the second ratio accordingly, and sends the second ratio to the electronic device, so that the electronic device obtains the second ratio. Illustratively, the second ratio is 1:10.
Alternatively, the electronic device stores the correspondence between each camera device and its pixel ratio, and the correspondence between each camera device and the road it covers. The electronic device determines the second camera device corresponding to the second road, obtains the pixel ratio of the second camera device, and takes that pixel ratio as the second ratio.
The process of determining the target distance according to the second ratio and the second threshold is not limited in the embodiments of the present application. Illustratively, the target distance is determined from the second ratio and the second threshold according to the following formula (2):

C = k × D (2)

In formula (2), k is the second ratio, D is the second threshold, and C is the target distance. Illustratively, if the second ratio is 1:10 and the second threshold is 10 meters, the target distance determined according to formula (2) is 1 meter.
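Formula (2) can be sketched directly: the target distance is the second ratio (image/actual) times the second threshold, which converts the road-side threshold into image coordinates. The function name is illustrative.

```python
def target_distance(second_ratio, second_threshold):
    """Formula (2): C = k * D, where k is the camera's pixel ratio
    expressed as image/actual and D is the second threshold."""
    return second_ratio * second_threshold

# A 1:10 ratio and a 10 m threshold give a 1 m target distance,
# as in the example above.
```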
In one possible implementation, after the target distance is determined, the area that is within the target distance of the second reference line and has not crossed the second reference line is taken as the target area. After the target area is determined, it is recognized to obtain the second content, which includes object information of the objects included in the target area. Optionally, the process of recognizing the target area to obtain the second content is similar to the process of recognizing the first image to obtain the first content and is not repeated here.
Optionally, the second image may also be recognized in the following manner to obtain the second recognition result: recognizing the second image to obtain third content, the third content being object information of the objects included in the second image; based on the third content indicating that the second image includes a second vehicle whose relationship with the second reference line meets a relationship requirement, determining the image distance between the second vehicle and the second reference line in the second image; acquiring the second ratio of the second image, and converting the image distance between the second vehicle and the second reference line according to the second ratio to obtain the actual distance from the second vehicle to the second reference line; based on the actual distance from the second vehicle to the second reference line being greater than a third threshold, determining that the second recognition result indicates that the second image does not include a vehicle waiting to pass through the signal lamp of the second road; and based on the actual distance being not greater than the third threshold, determining that the second recognition result indicates that the second image includes a vehicle waiting to pass through the signal lamp of the second road. The second vehicle meets the relationship requirement with the second reference line when the head of the second vehicle faces the second reference line and the second vehicle has not crossed the second reference line. The third threshold is set empirically or adjusted according to the implementation environment, which is not limited in the embodiments of the present application. Illustratively, the third threshold is 10 meters.
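The alternative recognition path can be sketched as follows: a vehicle counts as waiting at the signal lamp of the second road when it faces the second reference line, has not crossed it, and its actual distance to the line is not greater than the third threshold. The dictionary keys and function name are illustrative assumptions about the detector's output.

```python
def second_image_has_waiting_vehicle(vehicles, second_ratio, third_threshold):
    """vehicles: per-vehicle detections with 'faces_line',
    'crossed_line' and 'image_distance' fields (illustrative)."""
    for v in vehicles:
        # Relationship requirement: head faces the second reference
        # line and the vehicle has not crossed it.
        if v["faces_line"] and not v["crossed_line"]:
            actual = v["image_distance"] / second_ratio  # as in formula (1)
            if actual <= third_threshold:
                return True
    return False
```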
It should be noted that the process of converting the image distance between the second vehicle and the second reference line according to the second ratio to obtain the actual distance from the second vehicle to the second reference line is similar to the process of converting the image distance between the first vehicle and the first reference line according to the first ratio, and is not repeated here.
In step 205, based on the second recognition result indicating that the second image does not include a vehicle waiting to pass through the signal lamp of the second road, the color of the signal lamp of the first road is controlled to change to the second color.
In one possible implementation, if the second recognition result indicates that the second image does not include a vehicle waiting to pass through the signal lamp of the second road, the color of the signal lamp of the first road is controlled to change to the second color. To ensure safe passage of vehicles, the color of the signal lamp of the second road is also controlled to change to the first color. That is, the signal lamp of the first road changes from red to green, and the signal lamp of the second road changes from green to red. In this way, vehicles on the first road can pass the signal lamp of the first road smoothly without congestion.
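The control action above amounts to swapping the two lamp colors when the second road has no waiting vehicle. The following sketch assumes the first color is red (not passable) and the second color is green (passable); the dictionary layout and names are illustrative.

```python
FIRST_COLOR, SECOND_COLOR = "red", "green"

def control_signal_lamps(second_has_waiting_vehicle, lamps):
    """Step 205 as a pure function: when the second road has no waiting
    vehicle, the first road's lamp becomes the second color and the
    second road's lamp becomes the first color; otherwise no change."""
    lamps = dict(lamps)  # do not mutate the caller's state
    if not second_has_waiting_vehicle:
        lamps["first_road"] = SECOND_COLOR
        lamps["second_road"] = FIRST_COLOR
    return lamps
```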
Fig. 4 is a schematic diagram of signal lamp control according to an embodiment of the present application. In Fig. 4 (1), the color of the signal lamp of the first road is the first color and the color of the signal lamp of the second road is the second color; there is a vehicle on the first road waiting to pass through the signal lamp of the first road, but no vehicle waiting to pass through the signal lamp of the second road in the target area 401 of the second road. Therefore, the color of the signal lamp of the first road is changed to the second color, and the color of the signal lamp of the second road is changed to the first color. Fig. 4 (2) shows the state after signal lamp control.
If the second recognition result indicates that the second image includes a vehicle waiting to pass through the signal lamp of the second road, neither the color of the signal lamp of the first road nor the color of the signal lamp of the second road needs to be controlled. The colors of the two signal lamps are controlled only when the recognition result of the road image of the second road indicates that the second road does not include a vehicle waiting to pass through the signal lamp of the second road, the color of the signal lamp of the first road is the first color, and the color of the signal lamp of the second road is the second color.
According to the method, when there are vehicles on the first road waiting for its signal lamp while that lamp shows the first color, and there are no vehicles on the second road waiting for its signal lamp while that lamp shows the second color, the color of the signal lamp of the first road is adjusted so that vehicles on the first road can pass quickly, shortening their waiting time and improving the passing efficiency of the first road.
Fig. 5 is a flowchart of a method for controlling a signal lamp according to an embodiment of the present application, as shown in fig. 5, where the method includes:
step 501, acquiring a first image of a first road.
The color of the signal lamp of the first road is a first color, and the first color is used for indicating that the first road is not passable. The process of acquiring the first image of the first road is described in the above step 201, and will not be described here again.
Step 502, identifying the first image to obtain a first identification result. This process is described in step 202 above and will not be described in detail here.
Step 503, determining whether the first recognition result indicates that the first image includes a vehicle waiting to pass through a signal light of the first road.
Step 504, based on the first recognition result indicating that the first image includes a vehicle waiting to pass through the signal lamp of the first road, acquiring a second image of the second road. This process is described in step 203 above and is not repeated here.
Step 505, identifying the second image to obtain a second identification result, where the process is described in step 204, and will not be described herein.
Step 506, determining whether the second recognition result indicates that the second image includes a vehicle waiting to pass through a signal light of the second road.
Step 507, based on the second recognition result indicating that the second image does not include a vehicle waiting to pass through the signal lamp of the second road, controlling the color of the signal lamp of the first road to change to the second color and the color of the signal lamp of the second road to change to the first color.
If the first recognition result indicates that the first image does not include a vehicle waiting to pass through the signal lamp of the first road, or the first recognition result indicates that it does but the second recognition result indicates that the second image includes a vehicle waiting to pass through the signal lamp of the second road, no adjustment needs to be made to the color of the signal lamp of the first road or to the color of the signal lamp of the second road.
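The overall flow of steps 501 to 507 can be sketched end to end, with the recognition steps injected as callables so the control decision stands on its own. All names are illustrative; the sketch assumes the first color is "red" and the second color is "green".

```python
def signal_lamp_flow(get_first_image, recognize_first,
                     get_second_image, recognize_second, lamps):
    """lamps maps road name to current color; the recognizers return
    True when their image contains a vehicle waiting at the lamp."""
    first_image = get_first_image()           # step 501
    if not recognize_first(first_image):      # steps 502-503
        return lamps                          # no adjustment needed
    second_image = get_second_image()         # step 504
    if recognize_second(second_image):        # steps 505-506
        return lamps                          # no adjustment needed
    lamps = dict(lamps)                       # step 507: swap colors
    lamps["first_road"], lamps["second_road"] = (
        lamps["second_road"], lamps["first_road"])
    return lamps
```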
Fig. 6 is a schematic structural diagram of a control device for a signal lamp according to an embodiment of the present application, where, as shown in fig. 6, the device includes:
the acquiring module 601 is configured to acquire a first image of a first road, where a color of a signal lamp of the first road is a first color, and the first color is used to indicate that the first road is not passable;
the recognition module 602 is configured to recognize the first image to obtain a first recognition result;
the obtaining module 601 is further configured to indicate, based on the first recognition result, that the first image includes a vehicle waiting for passing through a signal lamp of the first road, obtain a second image of a second road, where the second road and the first road are intersecting roads, and the signal lamp of the second road has a second color, where the second color is used to indicate that the second road is passable;
the recognition module 602 is further configured to recognize the second image to obtain a second recognition result;
the control module 603 is configured to instruct, based on the second recognition result, that the second image does not include a vehicle waiting to pass through the traffic lights of the second road, and control the color of the traffic lights of the first road to change to the second color.
In a possible implementation manner, the recognition module 602 is configured to recognize the first image to obtain first content, where the first content includes object information of the objects included in the first image; based on the first content indicating that the first image includes a first vehicle, determine the actual distance from the first vehicle to a first reference line, the first reference line being the stop line of the signal lamp of the first road; and based on the actual distance from the first vehicle to the first reference line being smaller than a first threshold, determine that the first recognition result indicates that the first image includes a vehicle waiting to pass through the signal lamp of the first road.
In one possible implementation, the identifying module 602 is configured to obtain a first proportion of the first image, where the first proportion is a pixel proportion of a first image capturing device that captures the first image; determining an image distance of the first vehicle to a first reference line in the first image; according to the first proportion, the image distance is converted, and the actual distance from the first vehicle to the first reference line is obtained.
In one possible implementation, the actual distance of the first vehicle from the first reference line is the actual distance of the head of the first vehicle from the first reference line; alternatively, the actual distance of the first vehicle to the first reference line is the actual distance of the tail of the first vehicle to the first reference line.
In a possible implementation manner, the identifying module 602 is configured to determine a target area in the second image, where the target area is determined based on a second reference line, and the second reference line is a stop line of a signal lamp of the second road; identifying the target area to obtain second content, wherein the second content comprises object information of objects included in the target area; based on the second content indicating that no vehicle is included in the target area, it is determined that the second recognition result indicates that no vehicle waiting to pass through a signal lamp of the second road is included in the second image.
In a possible implementation manner, the recognition module 602 is configured to acquire a second ratio of the second image, where the second ratio is the pixel ratio of the second camera device that captured the second image; determine a target distance according to the second ratio and a second threshold; and take, as the target area, the area that is within the target distance of the second reference line and has not crossed the second reference line.
In a possible implementation, the control module 603 is further configured to control the color of the signal light of the second road to change to the first color based on the second identification result indicating that the second image does not include a vehicle waiting to pass the signal light of the second road.
According to the device, when the vehicles waiting for the signal lamp of the first road exist on the first road, but the color of the signal lamp of the first road is the first color, and the vehicles waiting for the signal lamp of the second road do not exist on the second road, but the color of the signal lamp of the second road is the second color, the colors of the signal lamps of the first road are adjusted, so that the vehicles of the first road can pass quickly, the passing time of the vehicles of the first road is shortened, and the passing efficiency of the vehicles of the first road is improved.
It should be understood that, in implementing the functions of the apparatus provided above, only the division of the above functional modules is illustrated, and in practical application, the above functional allocation may be implemented by different functional modules, that is, the internal structure of the device is divided into different functional modules, so as to implement all or part of the functions described above. In addition, the apparatus and the method embodiments provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the apparatus and the method embodiments are detailed in the method embodiments and are not repeated herein.
Fig. 7 shows a block diagram of a terminal device 700 according to an exemplary embodiment of the present application. The terminal device 700 may be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal device 700 may also be called by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal device 700 includes: a processor 701 and a memory 702.
Processor 701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 701 may be implemented in at least one hardware form of DSP (Digital Signal Processing ), FPGA (Field-Programmable Gate Array, field programmable gate array), PLA (Programmable Logic Array ). The processor 701 may also include a main processor, which is a processor for processing data in an awake state, also referred to as a CPU (Central Processing Unit ); a coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit, image processor) for taking care of rendering and drawing of content that the display screen is required to display. In some embodiments, the processor 701 may also include an AI (Artificial Intelligence ) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. The memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement the method of controlling a signal light provided by the method embodiments herein.
In some embodiments, the terminal device 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 703 via buses, signal lines or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, a display 705, a camera assembly 706, audio circuitry 707, a positioning assembly 708, and a power supply 709.
A peripheral interface 703 may be used to connect I/O (Input/Output) related at least one peripheral device to the processor 701 and memory 702. In some embodiments, the processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 704 is configured to receive and transmit RF (Radio Frequency) signals, also referred to as electromagnetic signals. The radio frequency circuitry 704 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 704 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: antenna systems, RF transceivers, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth. The radio frequency circuitry 704 may communicate with other terminal devices via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the world wide web, metropolitan area networks, intranets, generation mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity ) networks. In some embodiments, the radio frequency circuitry 704 may also include NFC (Near Field Communication ) related circuitry, which is not limited in this application.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 705 is a touch display, the display 705 also has the ability to collect touch signals at or above the surface of the display 705. The touch signal may be input to the processor 701 as a control signal for processing. At this time, the display 705 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the display 705 may be one and disposed on the front panel of the terminal device 700; in other embodiments, the display 705 may be at least two, respectively disposed on different surfaces of the terminal device 700 or in a folded design; in other embodiments, the display 705 may be a flexible display disposed on a curved surface or a folded surface of the terminal device 700. Even more, the display 705 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The display 705 may be made of LCD (Liquid Crystal Display ), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 706 is used to capture images or video. Optionally, the camera assembly 706 includes a front camera and a rear camera. Typically, the front camera is provided on the front panel of the terminal device 700, and the rear camera is provided on the rear surface of the terminal device 700. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused for a background-blurring function, or the main camera and the wide-angle camera can be fused for panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 706 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash; the latter is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing, or inputting the electric signals to the radio frequency circuit 704 for voice communication. For stereo acquisition or noise reduction purposes, a plurality of microphones may be respectively disposed at different portions of the terminal device 700. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 707 may also include a headphone jack.
The positioning component 708 is used to determine the current geographic location of the terminal device 700 to enable navigation or LBS (Location Based Service). The positioning component 708 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 709 is used to power the various components in the terminal device 700. The power supply 709 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 709 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal device 700 further includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyroscope sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal apparatus 700. For example, the acceleration sensor 711 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 701 may control the display screen 705 to display a user interface in a landscape view or a portrait view based on the gravitational acceleration signal acquired by the acceleration sensor 711. The acceleration sensor 711 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 712 may detect a body direction and a rotation angle of the terminal device 700, and the gyro sensor 712 may collect a 3D motion of the user to the terminal device 700 in cooperation with the acceleration sensor 711. The processor 701 may implement the following functions based on the data collected by the gyro sensor 712: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 713 may be disposed at a side frame of the terminal device 700 and/or at a lower layer of the display screen 705. When the pressure sensor 713 is provided at a side frame of the terminal device 700, a grip signal of the user to the terminal device 700 may be detected, and the processor 701 performs left-right hand recognition or quick operation according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at the lower layer of the display screen 705, the processor 701 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 714 is used to collect a fingerprint of the user, and the processor 701 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 714 may be provided on the front, back or side of the terminal device 700. When a physical key or vendor Logo is provided on the terminal device 700, the fingerprint sensor 714 may be integrated with the physical key or vendor Logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the display screen 705 based on the ambient light intensity collected by the optical sensor 715: when the ambient light intensity is high, the display brightness of the display screen 705 is turned up; when the ambient light intensity is low, the display brightness of the display screen 705 is turned down. In another embodiment, the processor 701 may also dynamically adjust the shooting parameters of the camera assembly 706 based on the ambient light intensity collected by the optical sensor 715.
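The brightness control described above can be sketched as a simple mapping from ambient light intensity to display brightness. This is a minimal illustration, not the patent's implementation: the linear mapping, the lux cap, and all parameter names are assumptions.

```python
def display_brightness(ambient_lux, min_b=0.2, max_b=1.0, max_lux=1000.0):
    """Map ambient light intensity (lux) to a display brightness level.

    Brighter surroundings yield a brighter screen; the mapping is clamped
    to [min_b, max_b]. All constants here are hypothetical examples.
    """
    ratio = min(max(ambient_lux / max_lux, 0.0), 1.0)  # clamp to [0, 1]
    return min_b + (max_b - min_b) * ratio
```

Under this sketch, darkness yields the minimum brightness and any intensity at or above `max_lux` yields full brightness.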
A proximity sensor 716, also referred to as a distance sensor, is typically provided on the front panel of the terminal device 700. The proximity sensor 716 is used to collect the distance between the user and the front face of the terminal device 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front face of the terminal device 700 gradually decreases, the processor 701 controls the display screen 705 to switch from the screen-on state to the screen-off state; when the proximity sensor 716 detects that the distance between the user and the front face of the terminal device 700 gradually increases, the processor 701 controls the display screen 705 to switch from the screen-off state to the screen-on state.
It will be appreciated by those skilled in the art that the structure shown in fig. 7 does not constitute a limitation of the terminal device 700, which may include more or fewer components than shown, combine certain components, or employ a different arrangement of components.
Fig. 8 is a schematic structural diagram of a server according to an embodiment of the present application. The server 800 may include one or more processors (Central Processing Unit, CPU) 801 and one or more memories 802, where at least one piece of program code is stored in the one or more memories 802 and is loaded and executed by the one or more processors 801 to implement the signal lamp control method provided by the above method embodiments. Of course, the server 800 may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing device functions, which are not described in detail herein.
In an exemplary embodiment, there is also provided a computer-readable storage medium having at least one piece of program code stored therein, the program code being loaded and executed by a processor to cause a computer to implement any one of the above-described signal lamp control methods.
Alternatively, the above-mentioned computer-readable storage medium may be a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program or a computer program product is also provided, in which at least one computer instruction is stored, the computer instruction being loaded and executed by a processor to cause the computer to implement any one of the above-mentioned signal lamp control methods.
It should be noted that the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals referred to in this application are all authorized by the user or fully authorized by all parties, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, both the first image and the second image referred to herein are acquired with sufficient authorization.
It should be understood that references herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The foregoing embodiment numbers of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
The foregoing descriptions are merely exemplary embodiments of the present application and are not intended to limit the present application; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (10)

1. A method of controlling a signal lamp, the method comprising:
acquiring a first image of a first road, wherein the color of a signal lamp of the first road is a first color, and the first color is used for indicating that the first road is not passable;
identifying the first image to obtain a first identification result;
acquiring a second image of a second road based on the first identification result indicating that the first image includes a vehicle waiting to pass the signal lamp of the first road, wherein the second road and the first road are intersecting roads, the color of the signal lamp of the second road is a second color, and the second color is used for indicating that the second road is passable;
identifying the second image to obtain a second identification result;
and controlling the color of the signal lamp of the first road to change to the second color based on the second recognition result indicating that the second image does not include a vehicle waiting to pass the signal lamp of the second road.
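The overall flow of claim 1 can be sketched as a short control routine. This is an illustrative sketch only: `recognize` and `set_color` are hypothetical callables standing in for the image-recognition and lamp-actuation steps, and the color strings are placeholders.

```python
def control_signal_lamp(first_image, second_image, recognize, set_color):
    """Claim-1 flow sketch: if a vehicle waits at the red (first) road and
    no vehicle waits at the green (second) road, switch the first road's
    lamp to the passable (second) color. recognize(image) -> bool and
    set_color(road, color) are assumed interfaces, not from the patent.
    Returns True when the lamp color was changed."""
    if recognize(first_image):            # vehicle waiting on the first road
        if not recognize(second_image):   # no vehicle waiting on the second road
            set_color("first_road", "green")
            return True
    return False
```

In this sketch the lamp only changes when both conditions hold, matching the claimed two-stage check.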
2. The method of claim 1, wherein the identifying the first image to obtain a first identification result comprises:
identifying the first image to obtain first content, wherein the first content comprises object information of an object included in the first image;
determining an actual distance from a first vehicle to a first reference line based on the first content indicating that the first image includes the first vehicle, wherein the first reference line is a stop line of the signal lamp of the first road;
and determining that the first recognition result indicates that the first image includes a vehicle waiting to pass the signal lamp of the first road based on the actual distance from the first vehicle to the first reference line being less than a first threshold.
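The decision rule of claim 2 reduces to a threshold test on vehicle-to-stop-line distances. The sketch below is hypothetical: the function name and the 2.0 m default threshold are assumptions, not values from the patent.

```python
def waiting_vehicle_present(vehicle_distances_m, threshold_m=2.0):
    """Claim-2 rule sketch: the image 'includes a vehicle waiting to pass'
    when any recognized vehicle is closer to the stop line than the first
    threshold. The 2.0 m threshold is an illustrative assumption."""
    return any(d < threshold_m for d in vehicle_distances_m)
```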
3. The method of claim 2, wherein the determining the actual distance of the first vehicle to a first reference line comprises:
acquiring a first proportion of the first image, wherein the first proportion is a pixel proportion of a first image pickup device for picking up the first image;
determining an image distance of the first vehicle from the first reference line in the first image;
and converting the image distance according to the first proportion to obtain the actual distance from the first vehicle to the first reference line.
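The conversion in claim 3 multiplies the measured image distance by the camera's pixel proportion. A minimal sketch, assuming the proportion is expressed in metres per pixel (the unit is an assumption; the patent does not specify it):

```python
def actual_distance(pixel_distance, metres_per_pixel):
    """Claim-3 sketch: convert an image-space distance (in pixels) to an
    actual distance using the first image pickup device's pixel proportion,
    assumed here to be metres per pixel."""
    return pixel_distance * metres_per_pixel
```

For example, a 120-pixel gap at 0.05 m/px corresponds to roughly 6 m from the vehicle to the stop line.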
4. The method of claim 2, wherein the actual distance of the first vehicle to the first reference line is the actual distance of the head of the first vehicle to the first reference line;
or, the actual distance from the first vehicle to the first reference line is the actual distance from the tail of the first vehicle to the first reference line.
5. The method according to any one of claims 1 to 4, wherein the identifying the second image to obtain a second identification result includes:
determining a target area in the second image, wherein the target area is determined based on a second reference line, and the second reference line is a stop line of a signal lamp of the second road;
identifying the target area to obtain second content, wherein the second content comprises object information of objects included in the target area;
and determining that the second identification result indicates that the second image does not include a vehicle waiting to pass the signal lamp of the second road based on the second content indicating that no vehicle is included in the target area.
6. The method of claim 5, wherein the determining the target region in the second image comprises:
acquiring a second proportion of the second image, wherein the second proportion is a pixel proportion of a second image pickup device for picking up the second image;
determining a target distance according to the second proportion and a second threshold value;
and taking an area that is within the target distance from the second reference line and has not crossed the second reference line as the target area.
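The target-area construction of claim 6 can be sketched as converting the second threshold into a pixel distance and cutting a band of the image behind the stop line. This is a hypothetical geometry: it assumes the road runs vertically in the image, vehicles approach from larger row indices, and the pixel proportion is metres per pixel; none of these specifics come from the patent.

```python
def target_region(stop_line_y, threshold_m, metres_per_pixel, image_width):
    """Claim-6 sketch: return the rectangle (x0, y0, x1, y1) of the second
    image to recognize, i.e. the band extending `threshold_m` metres back
    from the stop line on the side of vehicles that have not yet crossed it.
    Assumes vehicles approach from larger y values (an assumption)."""
    target_px = int(round(threshold_m / metres_per_pixel))  # target distance in pixels
    return (0, stop_line_y, image_width, stop_line_y + target_px)
```

With a stop line at row 400, a 10 m threshold, and 0.05 m/px, the recognized band spans rows 400 to 600 across the full image width.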
7. The method according to any one of claims 1 to 4, further comprising:
and controlling the color of the signal lamp of the second road to change to the first color based on the second recognition result indicating that the second image does not include a vehicle waiting to pass the signal lamp of the second road.
8. A control device for a signal lamp, the device comprising:
an acquisition module, configured to acquire a first image of a first road, wherein the color of a signal lamp of the first road is a first color, and the first color is used for indicating that the first road is not passable;
an identification module, configured to identify the first image to obtain a first identification result;
the acquisition module is further configured to acquire a second image of a second road based on the first identification result indicating that the first image includes a vehicle waiting to pass the signal lamp of the first road, wherein the second road and the first road are intersecting roads, the color of the signal lamp of the second road is a second color, and the second color is used for indicating that the second road is passable;
the identification module is further configured to identify the second image to obtain a second identification result;
and a control module, configured to control the color of the signal lamp of the first road to change to the second color based on the second identification result indicating that the second image does not include a vehicle waiting to pass the signal lamp of the second road.
9. A computer device, characterized in that the computer device comprises a processor and a memory, wherein at least one piece of program code is stored in the memory and is loaded and executed by the processor to implement the signal lamp control method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that at least one piece of program code is stored in the computer-readable storage medium and is loaded and executed by a processor to cause a computer device to implement the signal lamp control method according to any one of claims 1 to 7.
CN202211689956.8A 2022-12-27 2022-12-27 Signal lamp control method, device, equipment and computer readable storage medium Pending CN116311976A (en)

Publications (1)

Publication Number Publication Date
CN116311976A true CN116311976A (en) 2023-06-23

Family

ID=86798502

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination