WO2022199311A1 - Interaction method and interaction apparatus - Google Patents
Interaction method and interaction apparatus
- Publication number
- WO2022199311A1 (application PCT/CN2022/077520)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- road
- target
- road condition
- passable
- image
- Prior art date
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/26—Navigation specially adapted for navigation in a road network
- G01C21/28—Navigation in a road network with correlation of data from several navigational instruments
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3667—Display of a road map
- G01C21/3691—Retrieval, searching and output of information related to real-time traffic, weather, or environmental conditions
Definitions
- the present invention relates to the field of computer technology, and in particular, to an interaction method and an interaction device.
- the purpose of the embodiments of the present invention is to provide an interaction method and an interaction apparatus that determine the road condition information of each road segment from the position of each target object relative to the lane lines of its lane, as recognized in the road condition collection sequence recorded on that road segment, and that reflect the road condition information of each road segment in a timely and accurate manner by displaying collected road condition images, so that the user can avoid congested road segments in time.
- an interaction method comprising:
- the road condition information is determined according to the position of the target object in the road condition collection sequence corresponding to each road section, where the position of the target object is used to represent the position of the target object relative to the lane lines of the corresponding lane;
- a target image corresponding to a target road section is determined and sent, where the target road section is a road section whose road condition information in the route navigation information satisfies a predetermined road condition criterion.
- an interaction method comprising:
- the target image is determined based on pre-uploaded route navigation information
- the road condition display control is used to display the target image
- the target road segment is a road segment whose road condition information in the route navigation information satisfies a predetermined road condition criterion;
- the road condition information is determined according to the position of the target object in the road condition collection sequence corresponding to each road section in the route navigation information, and the position of the target object is used to represent the position of the target object relative to the lane lines of the corresponding lane.
- an interaction apparatus comprising:
- a navigation information acquisition unit for acquiring route navigation information
- a road condition information determination unit configured to determine the road condition information of each road section in the route navigation information, where the road condition information is determined according to the position of the target object in the road condition collection sequence corresponding to each road section, and the position of the target object is used to represent the position of the target object relative to the lane lines of the corresponding lane;
- an image sending unit configured to determine and send a target image corresponding to a target road section, where the target road section is a road section whose road condition information in the route navigation information satisfies a predetermined road condition criterion.
- an interaction apparatus comprising:
- a control display unit used for rendering and displaying the road condition display control on the navigation page in response to receiving the target image corresponding to the target road segment;
- the target image is determined based on pre-uploaded route navigation information
- the road condition display control is used to display the target image
- the target road segment is a road segment whose road condition information in the route navigation information satisfies a predetermined road condition criterion;
- the road condition information is determined according to the position of the target object in the road condition collection sequence corresponding to each road section in the route navigation information, and the position of the target object is used to represent the position of the target object relative to the lane lines of the corresponding lane.
- a computer-readable storage medium on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, implement the method of any one of the first aspect or the second aspect.
- an electronic device including a memory and a processor, wherein the memory is used to store one or more computer program instructions, and the one or more computer program instructions are executed by the processor to implement the method of any one of the first aspect or the second aspect.
- a computer program product comprising a computer program/instruction, wherein the computer program/instruction is executed by a processor to implement the method of any one of the first aspect or the second aspect.
- the server in the embodiment of the present invention determines the road condition information of each road segment according to the position of each target object relative to the lane lines of its lane in the road condition collection sequence of each road segment in the previously acquired route navigation information, and, after determining the target image corresponding to a road segment whose road condition information satisfies the predetermined road condition criterion, sends the target image to the terminal.
- the terminal may render and display a road condition display control for displaying the target image on the navigation page.
- the embodiment of the present invention can accurately determine the position of each target object through image recognition, determine the road condition information of each road section according to those positions, and display the road condition of a specific road section through a real-scene image, which improves the accuracy and timeliness of determining road conditions, so that users can avoid congested road sections in time.
- FIG. 1 is a schematic diagram of a hardware system architecture according to an embodiment of the present invention.
- FIG. 2 is a flow chart of the interaction method according to the first embodiment of the present invention.
- FIG. 3 is a flowchart of determining road condition information of each road segment in an optional implementation manner of the first embodiment of the present invention
- FIG. 4 is a schematic diagram of the position of a target object according to an embodiment of the present invention.
- FIG. 5 is a flowchart of determining the congestion state of the first road section in an optional implementation manner of the first embodiment of the present invention
- FIG. 6 is another schematic diagram of the position of a target object according to an embodiment of the present invention.
- FIG. 7 is a flowchart of the interaction method on the server side according to the first embodiment of the present invention.
- FIG. 9 is a flowchart of an interaction method according to a second embodiment of the present invention.
- FIG. 10 is a schematic diagram of an interface according to an embodiment of the present invention.
- FIG. 11 is another interface schematic diagram of an embodiment of the present invention.
- FIG. 14 is a schematic diagram of an interaction system according to a third embodiment of the present invention.
- FIG. 15 is a schematic diagram of an electronic device according to a fourth embodiment of the present invention.
- FIG. 1 is a schematic diagram of a hardware system architecture according to an embodiment of the present invention.
- the hardware system architecture shown in FIG. 1 may include at least one image acquisition device 11 , at least one platform-side server (hereinafter referred to as server) 12 and at least one user terminal 13 .
- for ease of description, one image acquisition device 11 , one server 12 and one user terminal 13 will be described as an example.
- the image acquisition device 11 is an image acquisition device with a positioning function installed on the vehicle side, which can record the road condition collection sequence of each road segment traveled during driving and, after the user's authorization, send to the server 12 both the recorded road condition collection sequence and the position at which the sequence was recorded.
- the image acquisition device 11 may specifically be an image acquisition device fixedly installed inside the vehicle (that is, the target device, not shown in the figure), such as a driving recorder, or a separately provided image acquisition device whose position relative to the corresponding vehicle remains fixed, such as a camera or a mobile terminal with a camera function carried while driving or riding in the vehicle, including a mobile phone, a tablet computer, a notebook computer, and the like.
- the image capturing device 11 can be connected to the server 12 and the user terminal 13 for communication through a network.
- the image capturing apparatus 11 may also be disposed on other movable or non-movable devices, such as a movable robot and the like.
- according to the road condition collection images in the road condition collection sequences uploaded by the image acquisition device 11, the server 12 can determine the road condition information of each road segment in the route navigation information from the position of each target object relative to the lane lines of its lane, then determine the target image and/or a target image sequence including the target image corresponding to each road segment whose road condition information satisfies the predetermined road condition criterion, and send the target image to the user terminal 13.
- the user terminal 13 may render and display a road condition display control for displaying the target image on the navigation page.
- the user terminal 13 may also receive the target image sequence sent by the server 12, and in response to the road condition display control being triggered, display a video playback page, and play the target image sequence through the video playback page.
- FIG. 2 is a flowchart of the interaction method according to the first embodiment of the present invention. As shown in FIG. 2, the method of this embodiment includes the following steps:
- Step S201 obtaining route navigation information.
- a user can log in to a predetermined application program with a navigation function through a user terminal (hereinafter referred to as a terminal), and set a departure point and a destination.
- the terminal can perform route planning according to the departure point and destination set by the user to obtain at least one route planning result, and determine the route planning result selected by the user as route navigation information.
- the terminal can obtain the route planning result through various existing methods, for example, sending the set departure point and destination to a predetermined route planning interface and obtaining the route planning result from that interface, which is not specifically limited in this embodiment.
- the terminal can also send the route navigation information to the server, so that the server can store the route navigation information in the database.
- the server can obtain the route navigation information pre-uploaded by the terminal from the database; if the route planning result selected by the user as route navigation information is not stored in the database, the server can receive the route navigation information sent by the terminal. After acquiring the route navigation information, the server may extract the road segment name of each road segment in the route navigation information.
- Step S202 determining the road condition information of each road segment in the route navigation information.
- the road condition information of each road segment is determined by the server according to the position of the target object in the road condition collection sequence.
- the road condition collection sequence is the image sequence of the road sections that each vehicle has recorded during the driving process.
- the image acquisition device configured for each vehicle can upload at least one road condition collection sequence, and also upload the recorded position of the vehicle at the time each road condition collection sequence was collected.
- the position of the vehicle may be determined by a positioning system (eg, global positioning system, Beidou satellite navigation system, etc.) configured by the corresponding image acquisition device, and may specifically be the coordinates of the vehicle in the world coordinate system.
- FIG. 3 is a flowchart of determining road condition information of each road segment in an optional implementation manner of the first embodiment of the present invention.
- the server may determine the road condition information of each road segment through the following steps:
- Step S301 determining the road section to be determined.
- the server may determine each road segment within a predetermined geographic range (for example, a predetermined city, a predetermined district/county, etc.) as a road segment to be determined, or may determine each road segment in the route navigation information as a road segment to be determined, which is not specifically limited in this embodiment.
- Step S302 image recognition is performed on each road condition collected image in the image sequence to be identified, and the position of the target object in each road condition collected image is determined.
- the road condition collection sequence is collected by an image collection device that moves with the vehicle, so in this embodiment, the target object is a vehicle.
- the target object can also be other objects, such as pedestrians, obstacles set in the road, and the like.
- the server can perform image recognition on each road condition collection image in each road condition collection sequence through various existing methods, for example, the method described in "Research on Vehicle Distance Detection Algorithm Based on Image Recognition" (Yin Yijie, master's thesis, 2012), to determine the distance of each target object relative to the image acquisition device, and then, according to the position of the vehicle when each road condition collection image was recorded, determine the coordinates of each target object in the world coordinate system corresponding to each road condition collection image as the position of the target object.
- the position of the target object is used to determine the road condition information of the road section, so the position of the target object can be used to represent the position of the target object relative to the lane lines of the corresponding lane (that is, the lane where the target object is located) in the road section to be determined.
- the lane line here may be the left lane line of the corresponding lane or the right lane line of the corresponding lane, which is not specifically limited in this embodiment.
- the server can also determine the position of each lane line through various existing methods, such as the method described in "Design and Implementation of an Auxiliary Positioning System Based on Image Recognition" (Wu Jiashun, master's thesis, 2018), or a trained SSD (Single Shot MultiBox Detector) model, and then determine the position of each target object in each road condition collection image relative to the lane lines of its lane in the road section to be determined according to the coordinates of each target object in the world coordinate system and the positions of the lane lines.
- FIG. 4 is a schematic diagram of the position of a target object according to an embodiment of the present invention.
- the vehicle V1 is a target object in the road condition collection image P1
- the lane line L1 and the lane line L2 are the left and right lane lines of the corresponding lane of the vehicle V1, respectively.
- after the server determines the positions of the vehicle V1, the lane line L1 and the lane line L2 by performing image recognition on the road condition collection image P1, the server can take the position of the vehicle V1 relative to the lane line L1 (that is, the shortest distance d1 between the vehicle V1 and the lane line L1) and the position of the vehicle V1 relative to the lane line L2 (that is, the shortest distance d2 between the vehicle V1 and the lane line L2) as the position of the vehicle V1.
- Step S303 determining the passable state of the lane corresponding to the target object according to the position of the target object.
- the congestion state of the road segment to be determined depends on whether each lane in the road segment to be determined is passable. Therefore, in this step, the passable state of the lane corresponding to each target object can be determined according to the position of that target object.
- the server may determine the target distance corresponding to the target object according to the position of the target object.
- the target distance is used to characterize the maximum distance between the target object and the lane lines of its lane. Taking the position of the target object shown in FIG. 4 as an example, the server can determine the larger of the shortest distance d1 between the vehicle V1 and the lane line L1 and the shortest distance d2 between the vehicle V1 and the lane line L2 (that is, the shortest distance d2) as the target distance corresponding to the vehicle V1.
- the server can also obtain the passable distance corresponding to the target device.
- the passable distance corresponding to the target device is equivalent to the width of the vehicle (ie, the distance between the planes parallel to the longitudinal symmetry plane of the vehicle and abutting against the fixed protrusions on both sides of the vehicle).
- vehicles of the same type usually have almost the same width, so the server can determine the passable distance corresponding to the vehicle according to the type of the vehicle.
- the width of the ordinary car is usually between 1.4 and 1.8 meters, so the server can use 1.8 meters as the passable distance of the ordinary car.
- the server may determine whether each lane is passable according to the target distance corresponding to the target object and the passable distance of the target device. For any lane, if the target distance corresponding to the target object is greater than (or greater than or equal to) the passable distance of the target device, the server can determine that the passable state of the lane is passable; if the target distance corresponding to the target object is less than the passable distance of the target device, the server can determine that the passable state of the lane is impassable.
- for example, after the server determines the target distance corresponding to the vehicle V1 (that is, the shortest distance d2) and the passable distance of the target device (for example, 1.8 meters), if the shortest distance d2 ≥ 1.8 meters, the server can determine that the passable state of the lane corresponding to the vehicle V1 is passable; if the shortest distance d2 < 1.8 meters, the server can determine that the passable state of the lane corresponding to the vehicle V1 is impassable.
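- the lane passability check described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the per-type passable distances and the example inputs are assumptions, with only the 1.8-meter figure for an ordinary car taken from the text.

```python
# Hypothetical per-type passable distances in meters; only the 1.8 m value
# for an ordinary car comes from the text, the rest is illustrative.
PASSABLE_DISTANCE = {"car": 1.8, "truck": 2.5}

def passable_state(d1: float, d2: float, vehicle_type: str = "car") -> str:
    """Return the passable state of the lane of one target object.

    d1, d2: shortest distances from the target object to the left and
    right lane lines of its lane (the "position" of the target object).
    """
    target_distance = max(d1, d2)  # widest remaining gap in the lane
    if target_distance >= PASSABLE_DISTANCE[vehicle_type]:
        return "passable"
    return "impassable"

print(passable_state(0.4, 2.1))  # gap of 2.1 m >= 1.8 m -> passable
print(passable_state(0.9, 1.2))  # gap of 1.2 m < 1.8 m -> impassable
```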
- Step S304 Determine the congestion state of the first road section corresponding to the image sequence to be recognized according to the passable state of each lane.
- the congestion state of the first road segment is used to represent the congestion state of the road segment to be determined when the corresponding vehicle is traveling on the road segment to be determined.
- FIG. 5 is a flowchart of determining the congestion state of the first road segment in an optional implementation manner of the first embodiment of the present invention. As shown in FIG. 5 , in an optional implementation manner of this embodiment, step S304 may include the following steps:
- Step S501 Determine the corresponding congestion state of the second road section according to the passable state of each lane corresponding to the collected images of each road condition.
- according to the passable state of each lane, the server may determine the congestion state of the road section to be determined at the moment when the image acquisition device of the target device recorded the image to be recognized, that is, the second road section congestion state.
- when the passable state of every lane is impassable, the server may determine that the second road section congestion state corresponding to the image to be recognized is congested; when the passable state of every lane is passable, the server may determine that the second road section congestion state corresponding to the image to be recognized is unblocked; when the passable state of at least one lane is impassable and the passable state of at least one lane is passable, the server may determine that the second road section congestion state corresponding to the image to be recognized is slow.
- FIG. 6 is another schematic diagram of the position of a target object according to an embodiment of the present invention.
- the road section to be determined includes a lane 61 , a lane 62 and a lane 63 .
- after the server determines, by image recognition on the image to be recognized P2, that the passable state of the lane 61 is impassable and that the passable states of the lane 62 and the lane 63 are passable, it can be determined that the second road section congestion state corresponding to the image to be recognized P2 is slow.
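- the mapping from lane passable states to a per-image congestion state can be sketched as follows; this is an illustrative reading of step S501, with the state names chosen to match the text.

```python
def second_segment_state(lane_states: list[str]) -> str:
    """Map the passable states of all lanes in one image to a per-image
    ("second road section") congestion state, per the rule in the text."""
    if all(s == "impassable" for s in lane_states):
        return "congested"   # every lane blocked
    if all(s == "passable" for s in lane_states):
        return "unblocked"   # every lane free
    return "slow"            # mixed: some lanes passable, some not

# The FIG. 6 example: lane 61 impassable, lanes 62 and 63 passable.
print(second_segment_state(["impassable", "passable", "passable"]))  # slow
```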
- Step S502 determining the congestion state of the first road segment according to the congestion state of each second road segment.
- the server may determine the first road section congestion state corresponding to the road condition collection sequence according to the second road section congestion state corresponding to each road condition collection image in the same road condition collection sequence.
- the frequency at which the image acquisition device records road condition collection images is usually high. If the number of consecutive images (sorted by recording time) in a road condition collection sequence that share the same second road section congestion state is less than a certain number (for example, the sequence contains 100 road condition collection images but fewer than 30 consecutive images have a second road section congestion state of congested), then during the movement of the target device the road condition of the road section to be determined may not actually have reached the level of congestion.
- therefore, the server may determine the first road section congestion state corresponding to the road condition collection sequence according to the second road section congestion state shared by a plurality of consecutive road condition collection images.
- specifically, the server may determine that the first road section congestion state corresponding to the road condition collection sequence is congested in response to the second road section congestion state corresponding to a plurality of consecutive road condition collection images being congested; determine that the first road section congestion state is slow in response to the second road section congestion state corresponding to a plurality of consecutive road condition collection images being slow; and determine that the first road section congestion state is unblocked in response to the second road section congestion state corresponding to a plurality of consecutive road condition collection images being unblocked.
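- the aggregation over consecutive images can be sketched as follows. This is one possible reading of step S502: the threshold of 30 consecutive images mirrors the 30-of-100 example in the text and is illustrative, not prescribed.

```python
def first_segment_state(image_states: list[str], min_run: int = 30):
    """Return the first per-image congestion state that persists for at
    least min_run consecutive images, or None if no state does."""
    run_state, run_len = None, 0
    for state in image_states:
        if state == run_state:
            run_len += 1
        else:
            run_state, run_len = state, 1  # a new run starts
        if run_len >= min_run:
            return run_state
    return None  # no state persisted long enough

# 40 consecutive "congested" images exceed the threshold of 30.
states = ["congested"] * 40 + ["slow"] * 60
print(first_segment_state(states))  # congested
```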
- Step S305 determining the road condition information of the road segment to be determined according to the congestion state of the first road segment.
- a single road condition collection sequence is recorded by the image acquisition device of one target device, so it is relatively one-sided. For example, the actual road condition of the road section to be determined may be congestion while the lane in which the target vehicle (that is, the target device) is driving is the emergency lane; the first road section congestion state determined by image recognition may then be slow, which does not match the actual road condition of the road section to be determined, and the accuracy is low.
- therefore, in this embodiment, the road condition information of the road section to be determined is determined from the first road section congestion states corresponding to the road condition collection sequences recorded by multiple vehicles driving on the road section to be determined in the same time period, so as to improve the accuracy of the determined road condition information.
- the server may acquire the congestion state of the first road section corresponding to each image sequence to be identified in the image sequence set.
- the image sequences to be recognized in the image sequence set are the road condition collection sequences recorded by a plurality of vehicles on the road section to be determined within the same predetermined period.
- the road condition information changes from time to time, so the period length of the predetermined period can be determined according to the change rule of the road condition information in the historical data.
- for example, if the change pattern of the road condition information obtained from the historical data is roughly an hourly change (for example, from congestion to slow), the period length of the predetermined period may be 1 hour.
- for example, if the road segment name of the road section to be determined is "xx street" and the predetermined period is 10:00-11:00 on March 5, 2021, the server can obtain the road condition collection sequences recorded by multiple vehicles driving on xx street during 10:00-11:00 on March 5, 2021 as the multiple road condition collection sequences corresponding to the "xx street" road section.
- the server may determine the number of to-be-identified image sequences in the image sequence set whose congestion state of the first road segment is unblocked (that is, the first number), the number of to-be-identified image sequences whose congestion state of the first road segment is slow-moving (that is, the second number), and the number of to-be-identified image sequences whose congestion state of the first road segment is congested (that is, the third number),
- and determine the congestion state of the first road segment corresponding to the largest number of road condition collection sequences as the road condition information of the road segment to be determined.
- that is, when the first number is greater than the second number and the first number is greater than the third number, the road condition information of the road segment to be determined is determined to be unblocked; when the second number is greater than the first number and the second number is greater than the third number, the road condition information of the road segment to be determined is determined to be slow-moving; when the third number is greater than the first number and the third number is greater than the second number, the road condition information of the road segment to be determined is determined to be congested.
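The majority vote over the first, second, and third numbers described above can be sketched as follows (a minimal illustration; the function name and state labels are assumptions, and the embodiment does not specify the behavior on ties):

```python
from collections import Counter

def road_condition_from_sequences(first_segment_states):
    """Determine the road condition information of a road segment to be
    determined by majority vote over the first-road-segment congestion
    states of its to-be-identified image sequences.

    first_segment_states: list of states, each one of
    "unblocked", "slow", "congested".
    """
    counts = Counter(first_segment_states)
    first = counts["unblocked"]   # the first number
    second = counts["slow"]       # the second number
    third = counts["congested"]   # the third number
    if first > second and first > third:
        return "unblocked"
    if second > first and second > third:
        return "slow"
    if third > first and third > second:
        return "congested"
    return None  # tie: not specified by the embodiment
```

For example, three sequences with states `["congested", "congested", "slow"]` would yield `"congested"` for the road segment.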
- the process of determining the road condition information of the road segments in steps S301-S305 may occur in the cycle previous to the cycle in which the route navigation information is received; that is, the server determines the road condition information of each road segment in the route navigation information of the current cycle according to the road condition information determined in the previous cycle.
- for example, if the cycle length of the predetermined cycle is 1 hour and the time when the server receives the route navigation information sent by the terminal is 9:30, the current cycle is 9:00-10:00, and the road condition information of each road segment in the route navigation information was determined within 8:00-9:00.
- the server may also determine the number of target objects in each road condition collected image by performing image recognition on each road condition collected image. Then, for the road condition collected images corresponding to the same road condition collection sequence, the server determines whether the numbers of target objects in multiple consecutive road condition collected images meet a predetermined number condition. If the predetermined number condition is satisfied, the server can determine that the congestion state of the road section corresponding to the road condition collection sequence is the congestion state corresponding to this number. For the road segment to be determined, the server may determine the road section congestion state corresponding to the largest number of road condition collection sequences as the road condition information of the road segment to be determined.
- the corresponding relationship between the number of target objects and the congestion state can be preset. For example, if the number of target objects is 0 to 3, the congestion state can be unblocked; if the number of target objects is 4 to 10, the congestion state can be slow-moving; if the number of target objects is 11 or more, the congestion state can be congestion.
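The count-to-state mapping and the check over consecutive images can be sketched as follows (the thresholds are the example values above; the window size of 3 consecutive images stands in for the "predetermined number condition", which the embodiment leaves unspecified):

```python
def congestion_state_from_count(num_target_objects):
    """Map the number of target objects recognized in one road condition
    collected image to a congestion state, using the example thresholds:
    0-3 unblocked, 4-10 slow-moving, 11+ congested."""
    if num_target_objects <= 3:
        return "unblocked"
    if num_target_objects <= 10:
        return "slow"
    return "congested"

def sequence_state(counts, window=3):
    """Determine the congestion state of the road section corresponding to
    one road condition collection sequence: a state is confirmed only when
    `window` consecutive images map to the same state (the window size is
    an assumption). Returns None when no such run exists."""
    states = [congestion_state_from_count(c) for c in counts]
    for i in range(len(states) - window + 1):
        run = states[i:i + window]
        if len(set(run)) == 1:  # all images in the run agree
            return run[0]
    return None
```

For example, object counts `[12, 12, 13, 2]` give three consecutive "congested" images, so the sequence's road section state is `"congested"`.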
- the above two methods can also be combined to determine the road condition information of each road section to be determined, so as to further improve the accuracy of determining the road condition information.
- the process of determining the road condition information may also occur at the terminal side, that is, steps S301 to S305 may also be performed by the terminal.
- Step S203 Determine and send a target image corresponding to the target road segment.
- the server may determine the road segment whose road condition information meets the predetermined road condition condition as the target road segment, and determine the target image from the road condition acquisition images of multiple road condition acquisition sequences.
- the predetermined road condition condition is used to determine whether the road condition information of each road segment is suitable for passing, so it can be set such that the road condition information is congestion, or such that the road condition information is slow-moving.
- the target image may be determined according to at least one of the definition of each road condition collected image and the number of target objects therein.
- for example, the server may determine the road condition collected image with the highest definition and the largest number of target objects as the target image.
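The selection of the target image can be sketched as follows (an illustrative sketch; the dictionary keys and the ordering of the two factors, definition first and object count as tie-breaker, are assumptions — the embodiment allows either factor alone or both):

```python
def select_target_image(images):
    """Pick the target image from the road condition collected images of
    the target road segment: highest definition (sharpness) first, then
    the largest number of target objects as a tie-breaker.

    images: list of dicts with keys "id", "sharpness", "num_objects".
    """
    return max(images, key=lambda im: (im["sharpness"], im["num_objects"]))
```

For instance, among two images with sharpness 0.8 and 0.9, the sharper one is chosen regardless of object count.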
- the server can also remove sensitive information in the target image.
- the server can remove sensitive information in the target image through various existing methods, for example, identifying sensitive information such as faces and license plate numbers in the target image through image recognition, and applying mosaic processing to the sensitive information, so as to obtain the target image for subsequent sending to the terminal.
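The mosaic processing step can be sketched in pure Python on a grayscale image stored as a list of rows (an illustrative sketch only; the function name and block size are assumptions, and a real system would first locate the face or license-plate region with an image recognition model):

```python
def mosaic_region(image, top, left, height, width, block=8):
    """Apply a simple mosaic (pixelation) to a rectangular region of a
    grayscale image given as a list of rows: each block x block tile is
    replaced by the integer average of its pixel values."""
    for by in range(top, top + height, block):
        for bx in range(left, left + width, block):
            ys = range(by, min(by + block, top + height))
            xs = range(bx, min(bx + block, left + width))
            vals = [image[y][x] for y in ys for x in xs]
            avg = sum(vals) // len(vals)  # average value of the tile
            for y in ys:
                for x in xs:
                    image[y][x] = avg
    return image
```

After the identified sensitive regions are pixelated this way, the image can no longer reveal the original face or plate details while the overall road scene remains recognizable.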
- the server may send the target image to the terminal according to the terminal identification.
- Step S204 in response to receiving the target image corresponding to the target road segment, rendering and displaying the road condition display control on the navigation page.
- the terminal may render and display a road condition display control for displaying the target image on the navigation page.
- the terminal may render and display the road condition display control at a predetermined position on the navigation page.
- the predetermined position may be any position on the navigation page, for example, the display position of the target road section on the navigation page, and/or the lower part, the left side, or the right side of the navigation page, which is not specifically limited in this embodiment.
- the terminal can display the target road section in different ways according to the received road section information, for example, by color distinction, which makes it easier for the user to view the traffic information of different road sections.
- the target image can also be determined on the terminal side.
- FIG. 7 is a flowchart of the interaction method on the server side according to the first embodiment of the present invention. As shown in FIG. 7 , the method of this embodiment may include the following steps on the server side:
- Step S201 obtaining route navigation information.
- Step S202 determining the road condition information of each road segment in the route navigation information.
- Step S203 Determine and send a target image corresponding to the target road segment.
- FIG. 8 is a flowchart of the interaction method on the terminal side according to the first embodiment of the present invention. As shown in FIG. 8 , the method of this embodiment may include the following steps:
- Step S204 in response to receiving the target image corresponding to the target road segment, rendering and displaying the road condition display control on the navigation page.
- after acquiring the route navigation information of the terminal, the server in this embodiment determines the road condition information of each road segment according to the position of the target object, relative to the corresponding lane of the corresponding road segment, in the previously acquired road condition collection sequence of each road segment in the route navigation information, and after determining the target image corresponding to the road segment whose road condition information satisfies the predetermined road condition condition, sends the target image to the terminal. After receiving the target image, the terminal may render and display a road condition display control for displaying the target image on the navigation page.
- in this way, the position of each target object can be accurately determined by means of image recognition, the road condition information of each road section can be determined according to the position of each target object, and the road conditions of a specific road section can be displayed through a real-scene image, which improves the accuracy and timeliness of determining the road conditions, so that users can avoid congested road sections in time.
- FIG. 9 is a flowchart of an interaction method according to a second embodiment of the present invention. As shown in Figure 9, the method of this embodiment includes the following steps:
- Step S901 obtaining route navigation information.
- step S901 is similar to the implementation manner of step S201, and details are not described herein again.
- Step S902 determining the road condition information of each road segment in the route navigation information.
- step S902 is similar to the implementation manner of step S202, and details are not described herein again.
- Step S903 Determine and send a target image corresponding to the target road segment.
- step S903 is similar to the implementation manner of step S203, and details are not described herein again.
- Step S904 in response to receiving the target image corresponding to the target road segment, rendering and displaying the road condition display control on the navigation page.
- step S904 is similar to the implementation manner of step S204, and details are not described herein again.
- the process of determining the road condition information may also occur at the terminal side, that is, steps S301 to S305 may also be performed by the terminal.
- Step S905 determining and sending the road segment information of the target road segment.
- the server may also determine and send the road segment information of the target road segment.
- the road segment information may include the road condition information of the target road segment, that is, congestion, slow-moving, unblocked, etc., and may also include the average driving speed and the congestion length of the target road segment.
- the average driving speed and the congestion length can be determined according to the positions of the target devices on which the image capture devices for uploading the road condition collected images of the target road section are set.
- for example, if the target road section is xx street, and vehicle V1-vehicle V100 are the target devices on which the image acquisition devices for uploading the road condition collection sequences of xx street are set, the server can respectively determine the average moving speed of vehicle V1-vehicle V100 according to the road condition collection sequences recorded by the image acquisition devices corresponding to vehicle V1-vehicle V100, and then determine the average driving speed of the target road section according to the average moving speeds of vehicle V1-vehicle V100.
- the server can also determine the congestion length of the target road section according to the positions of vehicle V1-vehicle V100 at the same moment. For example, if vehicle V1-vehicle V100 are distributed between 100 meters and 800 meters east of xx street, the server can determine that the congestion length of xx street is 700 meters.
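The two computations above can be sketched as follows (a minimal sketch; the function names are assumptions, speeds come from the per-vehicle road condition collection sequences, and positions are distances in meters along the road at the same moment):

```python
def average_driving_speed(per_vehicle_speeds):
    """Average driving speed of the target road section, computed from the
    average moving speed of each target device (vehicle) whose image
    capture device uploaded a road condition collection sequence."""
    return sum(per_vehicle_speeds) / len(per_vehicle_speeds)

def congestion_length(vehicle_positions_m):
    """Congestion length of the target road section: the span covered by
    the positions of the target devices at the same moment, e.g. vehicles
    spread between 100 m and 800 m give a congestion length of 700 m."""
    return max(vehicle_positions_m) - min(vehicle_positions_m)
```

With vehicles at 100 m, 450 m, and 800 m along xx street, `congestion_length` returns 700, matching the example in the text.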
- step S903 and step S905 may be performed simultaneously, or may be performed sequentially, which is not limited in this embodiment.
- Step S906 in response to receiving the road segment information of the target road segment, display the road segment information through the road condition display control.
- the terminal may also display the road section information of the target road section through the road condition display control.
- the terminal may display only part of the road section information, or may display all the road section information, which is not specifically limited in this embodiment.
- FIG. 10 is a schematic diagram of an interface according to an embodiment of the present invention.
- the interface shown in Figure 10 is a terminal interface.
- the page P1 is a navigation page
- the terminal can display the route navigation information 101 on the page P1, and display the target road section in the route navigation information 101, that is, the road segment 102, in a distinguishing color.
- the terminal can also render and display a road condition display control, that is, the control 103, at the display position of the road segment 102, and render and display a road condition display control, that is, the control 104, at the bottom of the navigation page.
- the road condition display control may display the road segment name (i.e., xx street), the road condition information (i.e., congestion), and the congestion length (i.e., congested for xxx meters).
- the terminal may display only the control 103, or only the control 104, or may display the control 103 and the control 104 at the same time, which is not specifically limited in this embodiment.
- the road section information of the target road section can also be determined on the terminal side.
- Step S907 Determine and send a target image sequence corresponding to the target road segment.
- the server may determine the road condition collection sequence including the target image as the target image sequence, or may cut out a sequence segment of a predetermined length (for example, 10 seconds) from the road condition collection sequence including the target image as the target image sequence, which is not specifically limited in this embodiment.
- the server can also remove, through various existing methods, the sensitive information of each road condition collected image according to the order of the road condition collected images in the target image sequence, and then obtain, from the road condition collected images after the sensitive information is removed, the target image sequence for subsequent sending to the terminal.
- the server may send the target image sequence to the terminal according to the terminal identification.
- step S903 and step S907 may be performed simultaneously, or may be performed sequentially, which is not specifically limited in this embodiment.
- Step S908 receiving a target image sequence corresponding to the target road segment.
- the terminal may also receive the target image sequence including the target image sent by the server.
- the target image sequence can also be determined on the terminal side.
- Step S909 in response to the road condition display control being triggered, the video playback page is displayed.
- characterizing the road condition of the target road segment by a single image may be limiting for the user, so in this embodiment the road condition of the target road segment is represented more clearly by the target image sequence.
- the terminal can display a video playback page for playing the target image sequence.
- Step S910 play the target image sequence through the video playing page.
- the terminal can automatically play the target image sequence through the video playback page, so as to avoid unnecessarily distracting the user's attention by requiring multiple operations while driving the vehicle.
- FIG. 11 is another interface schematic diagram of an embodiment of the present invention.
- the description is continued by taking the interface shown in FIG. 10 as an example.
- the terminal can display the video playback page shown in FIG. 11, namely the page P2, and play the target image sequence 111 including the target image through the video playback page.
- FIG. 12 is a flow chart on the server side of the interaction method according to the second embodiment of the present invention. As shown in FIG. 12 , the method of this embodiment may include the following steps on the server side:
- Step S901 obtaining route navigation information.
- Step S902 determining the road condition information of each road segment in the route navigation information.
- Step S903 Determine and send a target image corresponding to the target road segment.
- Step S905 sending the road segment information of the target road segment.
- Step S907 Determine and send a target image sequence corresponding to the target road segment.
- FIG. 13 is a flow chart on the terminal side of the interaction method according to the second embodiment of the present invention. As shown in FIG. 13 , the method of this embodiment may include the following steps:
- Step S904 in response to receiving the target image corresponding to the target road segment, rendering and displaying the road condition display control on the navigation page.
- Step S906 in response to receiving the road segment information of the target road segment, display the road segment information through the road condition display control.
- Step S908 receiving a target image sequence corresponding to the target road segment.
- Step S909 in response to the road condition display control being triggered, the video playback page is displayed.
- Step S910 play the target image sequence through the video playing page.
- after acquiring the route navigation information of the terminal, the server in this embodiment determines the road condition information of each road segment according to the position of the target object, relative to the corresponding lane of the corresponding road segment, in the previously acquired road condition collection sequence of each road segment in the route navigation information, and after determining the target image corresponding to the road segment whose road condition information satisfies the predetermined road condition condition, sends the target image to the terminal.
- the server may also send the target image sequence including the target image and the road section information of the target road section to the terminal.
- after receiving the target image, the terminal may render and display a road condition display control for displaying the target image on the navigation page.
- the terminal may display the road section information through the road condition display control, and after receiving the target image sequence, display the video playback page in response to the road condition display control being triggered, and play the target image sequence through the video playback page.
- in this way, the position of each target object can be accurately determined by means of image recognition, the road condition information of each road section can be determined according to the position of each target object, and the road conditions of a specific road section can be displayed through real-scene images and real-scene videos, which improves the accuracy and timeliness of determining the road conditions, so that users can avoid congested road sections in time.
- FIG. 14 is a schematic diagram of an interaction system according to a third embodiment of the present invention. As shown in FIG. 14 , the system of this embodiment includes an interaction device 14A and an interaction device 14B.
- the interaction device 14A is applicable to the server side, and includes a navigation information acquisition unit 1401 , a road condition information determination unit 1402 and an image transmission unit 1403 .
- the navigation information acquisition unit 1401 is used for acquiring route navigation information.
- the road condition information determining unit 1402 is configured to determine the road condition information of each road segment in the route navigation information, where the road condition information is determined according to the position of the target object in the road condition collection sequence corresponding to each road segment.
- the image sending unit 1403 is configured to determine and send a target image corresponding to a target road section, where the target road section is a road section whose road condition information in the route navigation information satisfies a predetermined road condition condition.
- the road condition information is determined by the road segment determination unit 1404 , the position determination unit 1405 , the traffic state determination unit 1406 , the congestion state determination unit 1407 and the road condition information determination unit 1408 .
- the road segment determination unit 1404 is used to determine the road segment to be determined.
- the position determination unit 1405 is configured to perform image recognition on each of the collected images of road conditions in the image sequence to be identified, and determine the position of the target object in each of the collected images of road conditions, and the image sequence to be identified corresponds to the road section to be determined.
- the passable state determination unit 1406 is configured to determine the passable state of the lane corresponding to the target object according to the position of the target object.
- the congestion state determination unit 1407 is configured to determine the congestion state of the first road section corresponding to the image sequence to be recognized according to the passable state of each lane.
- the road condition information determining unit 1408 is configured to determine the road condition information of the to-be-determined road segment according to the congestion state of the first road segment.
- the congestion state determination unit 1407 includes a second state determination subunit and a first state determination subunit.
- the second state determination subunit is configured to determine the corresponding congestion state of the second road section according to the passable state of each of the lanes corresponding to the collected images of each of the road conditions.
- the first state determination subunit is configured to determine the congestion state of the first road segment according to the congestion state of each of the second road segments.
- the passable state determination unit 1406 includes a first distance determination subunit, a second distance determination subunit, and a passable state determination subunit.
- the first distance determination subunit is used to determine the target distance corresponding to the target object according to the position of the target object, and the target distance is used to represent the maximum distance between the target object and the lane line of the corresponding lane.
- the second distance determination subunit is used for determining the traversable distance corresponding to the target device, where the target device is the device corresponding to the image sequence to be recognized.
- the passable state determination subunit is configured to determine the passable state of the lane according to the target distance and the passable distance.
- the passable state determination subunit includes a first state determination module and a second state determination module.
- the first state determination module is configured to determine that the passable state of the lane is passable in response to the target distance being not less than the passable distance.
- the second state determination module is configured to determine that the passable state of the lane is impassable in response to the target distance being less than the passable distance.
- the second state determination subunit includes a third state determination module, a fourth state determination module and a fifth state determination module.
- the third state determination module is configured to determine that the congestion state of the second road section is congestion in response to the passable states of all the corresponding lanes being impassable.
- the fourth state determination module is configured to determine that the congestion state of the second road section is unblocked in response to the passable states of all the corresponding lanes being passable.
- the fifth state determination module is configured to determine that the congestion state of the second road section is slow driving in response to the passable state of at least one corresponding lane being impassable and the passable state of at least one corresponding lane being passable.
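The distance comparison of the first and second state determination modules, together with the lane-to-section rule of the third, fourth, and fifth modules, can be sketched as follows (a minimal sketch; the function names and state labels are assumptions):

```python
def lane_passable(target_distance, passable_distance):
    """A lane is passable when the maximum distance between the target
    object and the lane line of the corresponding lane (target distance)
    is not less than the passable distance of the target device."""
    return target_distance >= passable_distance

def second_section_state(lane_passable_states):
    """Congestion state of the second road section for one road condition
    collected image, from the passable states of its lanes."""
    if not any(lane_passable_states):
        return "congested"   # all lanes impassable
    if all(lane_passable_states):
        return "unblocked"   # all lanes passable
    return "slow"            # at least one passable and one impassable
```

For example, two lanes with passable states `[True, False]` give a slow-driving second road section.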
- the first state determination subunit includes a sixth state determination module, a seventh state determination module and an eighth state determination module.
- the sixth state determination module is configured to determine that the congestion state of the first road segment is congestion in response to the congestion states of the second road sections corresponding to a plurality of consecutive road condition collected images all being congestion.
- the seventh state determination module is configured to determine that the congestion state of the first road section is slow driving in response to the congestion states of the second road sections corresponding to a plurality of consecutive road condition collected images being slow driving.
- the eighth state determination module is configured to determine that the congestion state of the first road segment is unblocked in response to the number of unblocked congestion states among the second road sections corresponding to a plurality of consecutive road condition collected images satisfying a third quantity condition.
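The aggregation performed by the sixth, seventh, and eighth state determination modules can be sketched as follows (an illustrative sketch; the window size of 3 consecutive images stands in for the quantity conditions, which the embodiment leaves unspecified):

```python
def first_section_state(second_section_states, window=3):
    """Congestion state of the first road section, from the congestion
    states of the second road sections corresponding to consecutive road
    condition collected images: a state is adopted when `window`
    consecutive images share it (the window size is an assumption)."""
    for i in range(len(second_section_states) - window + 1):
        run = second_section_states[i:i + window]
        if all(s == "congested" for s in run):
            return "congested"   # sixth module
        if all(s == "slow" for s in run):
            return "slow"        # seventh module
        if all(s == "unblocked" for s in run):
            return "unblocked"   # eighth module
    return None  # no run of consecutive images agrees
```

For instance, three consecutive congested second road sections yield a congested first road section.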
- the road condition information determination unit 1408 includes a state acquisition subunit, a quantity determination subunit, a first road condition determination subunit, a second road condition determination subunit, and a third road condition determination subunit.
- the state acquisition subunit is used to acquire the congestion state of the first road section corresponding to each of the to-be-identified image sequences in the image sequence set, where the image sequence set includes a plurality of to-be-identified image sequences corresponding to the road section to be determined within the same time period.
- the quantity determination subunit is used for determining a first quantity, a second quantity and a third quantity, where the first quantity represents the number of to-be-identified image sequences in the image sequence set whose congestion state of the first road section is unblocked, the second quantity represents the number of to-be-identified image sequences whose congestion state of the first road section is slow-moving, and the third quantity represents the number of to-be-identified image sequences whose congestion state of the first road section is congested.
- the first road condition determination subunit is configured to determine the road condition information as unblocked in response to the first quantity being greater than the second quantity and the first quantity being greater than the third quantity.
- the second road condition determination subunit is configured to determine the road condition information as slow travel in response to the second quantity being greater than the first quantity and the second quantity being greater than the third quantity.
- a third road condition determination subunit is configured to determine the road condition information as congestion in response to the third quantity being greater than the first quantity and the third quantity being greater than the second quantity.
- the image sending unit 1403 includes a quantity and definition determination subunit and an image determination subunit.
- the quantity and definition determination subunit is used to determine the number of target objects in each of the road condition collected images and/or the definition of each of the road condition collected images.
- the image determination subunit is configured to determine a target image according to the quantity and/or definition of the target object corresponding to each of the collected images of the road conditions.
- the apparatus 14A further includes a sequence sending unit 1409 .
- the sequence sending unit 1409 is configured to determine and send a target image sequence corresponding to the target road section, where the target image sequence includes the target image.
- the apparatus 14A further includes a link information sending unit 1410 .
- the road section information sending unit 1410 is configured to determine and send the road section information of the target road section, where the road section information includes road condition information of the target road section.
- the interaction device 14B is suitable for a terminal, and includes a control display unit 1411 .
- the control display unit 1411 is configured to render and display the road condition display control on the navigation page in response to receiving the target image corresponding to the target road segment.
- the target image is determined based on pre-uploaded route navigation information
- the road condition display control is used to display the target image
- the target road segment is a road segment in which the road condition information in the route navigation information satisfies a predetermined road condition condition
- the road condition information is determined according to the position of the target object in the road condition collection sequence corresponding to each road segment in the route navigation information.
- the road condition information is determined by the road segment determination unit 1404 , the position determination unit 1405 , the traffic state determination unit 1406 , the congestion state determination unit 1407 and the road condition information determination unit 1408 .
- the road segment determination unit 1404 is used to determine the road segment to be determined.
- the position determination unit 1405 is configured to perform image recognition on each of the collected images of road conditions in the image sequence to be identified, and determine the position of the target object in each of the collected images of road conditions, and the image sequence to be identified corresponds to the road section to be determined.
- the passable state determination unit 1406 is configured to determine the passable state of the lane corresponding to the target object according to the position of the target object.
- the congestion state determination unit 1407 is configured to determine the congestion state of the first road section corresponding to the image sequence to be recognized according to the passable state of each lane.
- the road condition information determining unit 1408 is configured to determine the road condition information of the to-be-determined road segment according to the congestion state of the first road segment.
- the congestion state determination unit 1407 includes a second state determination subunit and a first state determination subunit.
- the second state determination subunit is configured to determine the corresponding congestion state of the second road section according to the passable state of each of the lanes corresponding to the collected images of each of the road conditions.
- the first state determination subunit is configured to determine the congestion state of the first road segment according to the congestion state of each of the second road segments.
- the passable state determination unit 1406 includes a first distance determination subunit, a second distance determination subunit, and a passable state determination subunit.
- the first distance determination subunit is used to determine the target distance corresponding to the target object according to the position of the target object, and the target distance is used to represent the maximum distance between the target object and the lane line of the corresponding lane.
- the second distance determination subunit is used for determining the traversable distance corresponding to the target device, where the target device is the device corresponding to the image sequence to be recognized.
- the passable state determination subunit is configured to determine the passable state of the lane according to the target distance and the passable distance.
- the passable state determination subunit includes a first state determination module and a second state determination module.
- the first state determination module is configured to determine that the passable state of the lane is passable in response to the target distance being not less than the passable distance.
- the second state determination module is configured to determine that the passable state of the lane is impassable in response to the target distance being less than the passable distance.
- the second state determination subunit includes a third state determination module, a fourth state determination module and a fifth state determination module.
- the third state determination module is configured to determine that the congestion state of the second road section is congestion in response to the passable states of all the corresponding lanes being impassable.
- the fourth state determination module is configured to determine that the congestion state of the second road section is unblocked in response to the corresponding passable states of the lanes being passable.
- the fifth state determination module is configured to determine that the congestion state of the second road segment is slow driving in response to the corresponding at least one passable state of the lane being impassable and the passable state of at least one of the lanes being passable.
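The three modules above amount to a small mapping from per-lane states to a per-image segment state; a minimal sketch (the state strings are illustrative labels, and a non-empty lane list is assumed):

```python
def second_segment_state(lane_states: list) -> str:
    """Map the per-lane passable states of one road condition collection
    image to the congestion state of the corresponding second road segment."""
    if all(s == "impassable" for s in lane_states):
        return "congested"   # third module: every lane is blocked
    if all(s == "passable" for s in lane_states):
        return "unblocked"   # fourth module: every lane is open
    return "slow"            # fifth module: a mix of passable and impassable lanes

print(second_segment_state(["impassable", "impassable"]))
print(second_segment_state(["passable", "impassable"]))
```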
- the first state determination subunit includes a sixth state determination module, a seventh state determination module, and an eighth state determination module.
- the sixth state determination module is configured to determine that the congestion state of the first road segment is congested in response to the second road segment congestion states corresponding to a plurality of consecutive road condition collection images all being congested.
- the seventh state determination module is configured to determine that the congestion state of the first road segment is slow-moving in response to the second road segment congestion states corresponding to a plurality of consecutive road condition collection images being slow-moving.
- the eighth state determination module is configured to determine that the congestion state of the first road segment is unblocked in response to the number of unblocked second road segment congestion states corresponding to the plurality of consecutive road condition collection images satisfying a third quantity condition.
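The aggregation over consecutive images can be sketched as below; `unblocked_threshold` is an assumed stand-in for the patent's unspecified "third quantity condition", and the fallback branch is an assumption for mixes the modules do not enumerate:

```python
def first_segment_state(second_states: list, unblocked_threshold: int) -> str:
    """Aggregate the second road segment congestion states of a run of
    consecutive road condition collection images into a first road
    segment congestion state."""
    if all(s == "congested" for s in second_states):
        return "congested"   # sixth module: every image in the run is congested
    if all(s == "slow" for s in second_states):
        return "slow"        # seventh module: every image in the run is slow-moving
    if second_states.count("unblocked") >= unblocked_threshold:
        return "unblocked"   # eighth module: enough unblocked images in the run
    return "slow"            # fallback for mixes left open by the modules (assumption)

print(first_segment_state(["congested", "congested", "congested"], 2))
```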
- the road condition information determination unit 1408 includes a state acquisition subunit, a quantity determination subunit, a first road condition determination subunit, a second road condition determination subunit, and a third road condition determination subunit.
- the state acquisition subunit is configured to acquire the first road segment congestion state corresponding to each image sequence to be identified in an image sequence set, where the image sequence set includes a plurality of image sequences to be identified corresponding to the road segment to be determined within the same time period.
- the quantity determination subunit is configured to determine a first quantity, a second quantity, and a third quantity, where the first quantity represents the number of image sequences to be identified in the image sequence set whose first road segment congestion state is unblocked, the second quantity represents the number whose first road segment congestion state is slow-moving, and the third quantity represents the number whose first road segment congestion state is congested.
- the first road condition determination subunit is configured to determine the road condition information as unblocked in response to the first quantity being greater than both the second quantity and the third quantity.
- the second road condition determination subunit is configured to determine the road condition information as slow-moving in response to the second quantity being greater than both the first quantity and the third quantity.
- the third road condition determination subunit is configured to determine the road condition information as congested in response to the third quantity being greater than both the first quantity and the second quantity.
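The three comparisons above form a strict-majority vote over the image sequence set; a minimal sketch (the state labels are illustrative, and the tie behavior is an assumption since the subunits only cover strictly largest counts):

```python
from collections import Counter

def road_condition(first_states: list):
    """Choose the road condition by comparing how many image sequences in
    the set were unblocked (first quantity), slow-moving (second quantity),
    or congested (third quantity); a strictly largest count decides."""
    counts = Counter(first_states)
    first, second, third = counts["unblocked"], counts["slow"], counts["congested"]
    if first > second and first > third:
        return "unblocked"
    if second > first and second > third:
        return "slow"
    if third > first and third > second:
        return "congested"
    return None  # ties fall outside the three cases described here (assumption)

print(road_condition(["unblocked", "unblocked", "slow"]))
```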
- the target image is determined according to the number of target objects in each road condition collection image and/or the clarity of each road condition collection image, where the road condition collection images are images in the road condition collection sequence corresponding to the target road segment.
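One way to combine the two criteria is to rank frames by object count and break ties by clarity; the dict keys, the scoring order, and the weighting are illustrative assumptions, since the source only names the two criteria:

```python
def pick_target_image(images: list) -> dict:
    """Pick the target image from a road condition collection sequence,
    favoring frames with more target objects and, as a tie-breaker,
    higher clarity."""
    # max() compares the (objects, clarity) tuples lexicographically.
    return max(images, key=lambda im: (im["objects"], im["clarity"]))

frames = [
    {"id": "a", "objects": 3, "clarity": 0.7},
    {"id": "b", "objects": 5, "clarity": 0.6},
]
print(pick_target_image(frames)["id"])
```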
- the apparatus 14B further includes a sequence receiving unit 1412 , a page displaying unit 1413 and an image sequence playing unit 1414 .
- the sequence receiving unit 1412 is configured to receive a target image sequence corresponding to the target road section, where the target image sequence includes the target image.
- the page display unit 1413 is configured to display the video playback page in response to the road condition display control being triggered.
- the image sequence playing unit 1414 is configured to play the target image sequence through the video playing page.
- the control display unit 1411 is configured to render and display the road condition display control at a predetermined position of the navigation page.
- the predetermined position is the display position of the target road segment in the navigation page, and/or below the navigation page, and/or above the navigation page, and/or to the left of the navigation page, and/or to the right of the navigation page.
- the apparatus 14B further includes a road segment information display unit 1415 .
- the road segment information display unit 1415 is configured to display the road segment information through the road condition display control in response to receiving the road segment information of the target road segment, where the road segment information includes road condition information of the target road segment.
- After acquiring the route navigation information of the terminal, the server in this embodiment determines the road condition information of each road segment according to the position of the target object, in the road condition collection sequence of each road segment in the acquired route navigation information, relative to the corresponding lane of the corresponding road segment, and, after determining the target image corresponding to the road segment whose road condition information satisfies the predetermined road condition condition, sends the target image to the terminal.
- the server may also send the target image sequence including the target image and the road section information of the target road section to the terminal.
- After receiving the target image, the terminal may render and display, on the navigation page, a road condition display control for displaying the target image.
- the terminal may display the road segment information through the road condition display control and, after receiving the target image sequence, display the video playback page in response to the road condition display control being triggered and play the target image sequence through the video playback page.
- the position of each target object can be accurately determined by means of image recognition, the road condition information of each road segment can be determined according to those positions, and the road conditions of a specific road segment can be displayed through real-scene images and videos, which improves the accuracy and timeliness of road condition determination and enables users to avoid congested road segments in time.
- FIG. 15 is a schematic diagram of an electronic device according to a fourth embodiment of the present invention.
- the electronic device shown in FIG. 15 is a general-purpose data processing apparatus with a general-purpose computer hardware structure that includes at least a processor 1501 and a memory 1502 .
- the processor 1501 and the memory 1502 are connected by a bus 1503 .
- Memory 1502 is adapted to store instructions or programs executable by processor 1501 .
- the processor 1501 may be an independent microprocessor or a set of one or more microprocessors. The processor 1501 thus executes the instructions stored in the memory 1502 to perform the method flows of the embodiments of the present invention described above, so as to process data and control other devices.
- the bus 1503 connects the above-mentioned various components together, while connecting the above-mentioned components to the display controller 1504 and the display device and the input/output (I/O) device 1505 .
- the input/output (I/O) device 1505 may be a mouse, keyboard, modem, network interface, touch input device, somatosensory input device, printer, and other devices known in the art.
- input/output (I/O) devices 1505 are connected to the system through input/output (I/O) controllers 1506 .
- the memory 1502 may store software components, such as operating systems, communication modules, interaction modules, and application programs.
- Each of the modules and applications described above corresponds to a set of executable program instructions that perform one or more functions and methods described in embodiments of the invention.
- aspects of the embodiments of the present invention may be implemented as a system, method, or computer program product. Accordingly, various aspects of embodiments of the present invention may take the form of an entirely hardware implementation, an entirely software implementation (including firmware, resident software, microcode, etc.), or an implementation combining software and hardware aspects that may generally be referred to herein as a "circuit," "module," or "system." Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon.
- the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
- the computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or apparatus, or any suitable combination of the foregoing.
- a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, device, or apparatus.
- a computer-readable signal medium may include a propagated data signal having computer-readable program code embodied therein, such as in baseband or as part of a carrier wave. Such propagated signals may take any of a variety of forms including, but not limited to, electromagnetic, optical, or any suitable combination thereof.
- a computer-readable signal medium can be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, C++, PHP, and Python, and conventional procedural programming languages such as the "C" programming language or similar programming languages.
- the program code may execute entirely on the user's computer; partly on the user's computer as a stand-alone software package; partly on the user's computer and partly on a remote computer; or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (e.g., through the Internet using an Internet service provider).
Abstract
Description
Claims (29)
- An interaction method, characterized in that the method comprises: acquiring route navigation information; determining road condition information of each road segment in the route navigation information, wherein the road condition information is determined according to the position of a target object in a road condition collection sequence corresponding to each road segment; and determining and sending a target image corresponding to a target road segment, wherein the target road segment is a road segment in the route navigation information whose road condition information satisfies a predetermined road condition condition.
- The method according to claim 1, characterized in that the road condition information is determined by the following steps: determining a road segment to be determined; performing image recognition on each road condition collection image in an image sequence to be identified to determine the position of the target object in each road condition collection image, wherein the image sequence to be identified is the road condition collection sequence corresponding to the road segment to be determined; determining, according to the position of the target object, the lane passable state of the lane corresponding to the target object; determining, according to each lane passable state, the first road segment congestion state corresponding to the image sequence to be identified; and determining the road condition information of the road segment to be determined according to the first road segment congestion state.
- The method according to claim 2, characterized in that determining the first road segment congestion state corresponding to the image sequence to be identified according to each lane passable state comprises: determining the corresponding second road segment congestion state according to each lane passable state corresponding to each road condition collection image; and determining the first road segment congestion state according to each second road segment congestion state.
- The method according to claim 2, characterized in that the position of the target object represents the position of the target object relative to the lane line of the corresponding lane, and determining the lane passable state of the lane corresponding to the target object according to the position of the target object comprises: determining a target distance corresponding to the target object according to the position of the target object, wherein the target distance represents the maximum distance between the target object and the lane line of the corresponding lane; determining a passable distance corresponding to a target device, wherein the target device is the device corresponding to the image sequence to be identified; and determining the lane passable state according to the target distance and the passable distance.
- The method according to claim 4, characterized in that determining the lane passable state according to the target distance and the passable distance comprises: in response to the target distance being not less than the passable distance, determining that the lane passable state is passable; and in response to the target distance being less than the passable distance, determining that the lane passable state is impassable.
- The method according to claim 3, characterized in that determining the corresponding second road segment congestion state according to each lane passable state corresponding to each road condition collection image comprises: in response to all the corresponding lane passable states being impassable, determining that the second road segment congestion state is congested; in response to all the corresponding lane passable states being passable, determining that the second road segment congestion state is unblocked; and in response to at least one corresponding lane passable state being impassable and at least one lane passable state being passable, determining that the second road segment congestion state is slow-moving.
- The method according to claim 3, characterized in that determining the first road segment congestion state according to each second road segment congestion state comprises: in response to the second road segment congestion states corresponding to a plurality of consecutive road condition collection images all being congested, determining that the first road segment congestion state is congested; in response to the second road segment congestion states corresponding to a plurality of consecutive road condition collection images being slow-moving, determining that the first road segment congestion state is slow-moving; and in response to the number of unblocked second road segment congestion states corresponding to a plurality of consecutive road condition collection images satisfying a third quantity condition, determining that the first road segment congestion state is unblocked.
- The method according to claim 2, characterized in that determining the road condition information of the road segment to be determined according to the first road segment congestion state comprises: acquiring the first road segment congestion state corresponding to each image sequence to be identified in an image sequence set, wherein the image sequence set includes a plurality of image sequences to be identified corresponding to the road segment to be determined within the same time period; determining a first quantity, a second quantity, and a third quantity, wherein the first quantity represents the number of image sequences to be identified in the image sequence set whose first road segment congestion state is unblocked, the second quantity represents the number of image sequences to be identified in the image sequence set whose first road segment congestion state is slow-moving, and the third quantity represents the number of image sequences to be identified in the image sequence set whose first road segment congestion state is congested; in response to the first quantity being greater than the second quantity and the first quantity being greater than the third quantity, determining the road condition information as unblocked; in response to the second quantity being greater than the first quantity and the second quantity being greater than the third quantity, determining the road condition information as slow-moving; and in response to the third quantity being greater than the first quantity and the third quantity being greater than the second quantity, determining the road condition information as congested.
- The method according to claim 1, characterized in that determining and sending the target image corresponding to the target road segment comprises: determining the number of target objects in each road condition collection image and/or the clarity of each road condition collection image; and determining the target image according to the number of target objects and/or the clarity corresponding to each road condition collection image.
- The method according to claim 1, characterized in that the method further comprises: determining and sending a target image sequence corresponding to the target road segment, wherein the target image sequence includes the target image.
- The method according to claim 1 or 10, characterized in that the method further comprises: determining and sending road segment information of the target road segment, wherein the road segment information includes the road condition information of the target road segment.
- An interaction method, characterized in that the method comprises: in response to receiving a target image corresponding to a target road segment, rendering and displaying a road condition display control on a navigation page; wherein the target image is determined based on route navigation information uploaded in advance, the road condition display control is used to display the target image, the target road segment is a road segment in the route navigation information whose road condition information satisfies a predetermined road condition condition, and the road condition information is determined according to the position of a target object in the road condition collection sequence corresponding to each road segment in the route navigation information.
- The method according to claim 12, characterized in that the road condition information is determined by the following steps: determining a road segment to be determined; performing image recognition on each road condition collection image in an image sequence to be identified to determine the position of the target object in each road condition collection image, wherein the image sequence to be identified is the road condition collection sequence corresponding to the road segment to be determined; determining, according to the position of the target object, the lane passable state of the lane corresponding to the target object; determining, according to each lane passable state, the first road segment congestion state corresponding to the image sequence to be identified; and determining the road condition information of the road segment to be determined according to the first road segment congestion state.
- The method according to claim 13, characterized in that determining the first road segment congestion state corresponding to the image sequence to be identified according to each lane passable state comprises: determining the corresponding second road segment congestion state according to each lane passable state corresponding to each road condition collection image; and determining the first road segment congestion state according to each second road segment congestion state.
- The method according to claim 13, characterized in that determining the lane passable state of the lane corresponding to the target object according to the position of the target object comprises: determining a target distance corresponding to the target object according to the position of the target object, wherein the target distance represents the maximum distance between the target object and the lane line of the corresponding lane; determining a passable distance corresponding to a target device, wherein the target device is the device corresponding to the image sequence to be identified; and determining the lane passable state according to the target distance and the passable distance.
- The method according to claim 15, characterized in that determining the lane passable state according to the target distance and the passable distance comprises: in response to the target distance being not less than the passable distance, determining that the lane passable state is passable; and in response to the target distance being less than the passable distance, determining that the lane passable state is impassable.
- The method according to claim 14, characterized in that determining the corresponding second road segment congestion state according to each lane passable state corresponding to each road condition collection image comprises: in response to all the corresponding lane passable states being impassable, determining that the second road segment congestion state is congested; in response to all the corresponding lane passable states being passable, determining that the second road segment congestion state is unblocked; and in response to at least one corresponding lane passable state being impassable and at least one lane passable state being passable, determining that the second road segment congestion state is slow-moving.
- The method according to claim 14, characterized in that determining the first road segment congestion state according to each second road segment congestion state comprises: in response to the second road segment congestion states corresponding to a plurality of consecutive road condition collection images all being congested, determining that the first road segment congestion state is congested; in response to the second road segment congestion states corresponding to a plurality of consecutive road condition collection images being slow-moving, determining that the first road segment congestion state is slow-moving; and in response to the number of unblocked second road segment congestion states corresponding to a plurality of consecutive road condition collection images satisfying a third quantity condition, determining that the first road segment congestion state is unblocked.
- The method according to claim 13, characterized in that determining the road condition information of the road segment to be determined according to the first road segment congestion state comprises: acquiring the first road segment congestion state corresponding to each image sequence to be identified in an image sequence set, wherein the image sequence set includes a plurality of image sequences to be identified corresponding to the road segment to be determined within the same time period; determining a first quantity, a second quantity, and a third quantity, wherein the first quantity represents the number of image sequences to be identified in the image sequence set whose first road segment congestion state is unblocked, the second quantity represents the number of image sequences to be identified in the image sequence set whose first road segment congestion state is slow-moving, and the third quantity represents the number of image sequences to be identified in the image sequence set whose first road segment congestion state is congested; in response to the first quantity being greater than the second quantity and the first quantity being greater than the third quantity, determining the road condition information as unblocked; in response to the second quantity being greater than the first quantity and the second quantity being greater than the third quantity, determining the road condition information as slow-moving; and in response to the third quantity being greater than the first quantity and the third quantity being greater than the second quantity, determining the road condition information as congested.
- The method according to claim 12, characterized in that the target image is determined according to the number of target objects in each road condition collection image and/or the clarity of each road condition collection image, wherein the road condition collection images are images in the road condition collection sequence corresponding to the target road segment.
- The method according to claim 12, characterized in that the method further comprises: receiving a target image sequence corresponding to the target road segment, wherein the target image sequence includes the target image; in response to the road condition display control being triggered, displaying a video playback page; and playing the target image sequence through the video playback page.
- The method according to claim 12, characterized in that rendering and displaying the road condition display control on the navigation page comprises: rendering and displaying the road condition display control at a predetermined position of the navigation page.
- The method according to claim 22, characterized in that the predetermined position is the display position of the target road segment in the navigation page, and/or below the navigation page, and/or above the navigation page, and/or to the left of the navigation page, and/or to the right of the navigation page.
- The method according to claim 12 or 21, characterized in that the method further comprises: in response to receiving road segment information of the target road segment, displaying the road segment information through the road condition display control, wherein the road segment information includes the road condition information of the target road segment.
- An interaction apparatus, characterized in that the apparatus comprises: a navigation information acquisition unit configured to acquire route navigation information; a road condition information determination unit configured to determine road condition information of each road segment in the route navigation information, wherein the road condition information is determined according to the position of a target object in a road condition collection sequence corresponding to each road segment, and the position of the target object represents the position of the target object relative to the lane line of the corresponding lane; and an image sending unit configured to determine and send a target image corresponding to a target road segment, wherein the target road segment is a road segment in the route navigation information whose road condition information satisfies a predetermined road condition condition.
- An interaction apparatus, characterized in that the apparatus comprises: a control display unit configured to, in response to receiving a target image corresponding to a target road segment, render and display a road condition display control on a navigation page; wherein the target image is determined based on route navigation information uploaded in advance, the road condition display control is used to display the target image, the target road segment is a road segment in the route navigation information whose road condition information satisfies a predetermined road condition condition, the road condition information is determined according to the position of a target object in the road condition collection sequence corresponding to each road segment in the route navigation information, and the position of the target object represents the position of the target object relative to the lane line of the corresponding lane.
- A computer-readable storage medium storing computer program instructions, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1-24.
- An electronic device comprising a memory and a processor, characterized in that the memory is configured to store one or more computer program instructions, wherein the one or more computer program instructions are executed by the processor to implement the method according to any one of claims 1-24.
- A computer program product comprising a computer program/instructions, characterized in that the computer program/instructions, when executed by a processor, implement the method according to any one of claims 1-24.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
BR112023019025A BR112023019025A2 (pt) | 2021-03-23 | 2022-02-23 | Interaction methods and devices |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110309280.4 | 2021-03-23 | ||
CN202110309280.4A CN113048982B (zh) | 2021-03-23 | 2021-03-23 | Interaction method and interaction apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022199311A1 true WO2022199311A1 (zh) | 2022-09-29 |
Family
ID=76514635
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/077520 WO2022199311A1 (zh) | 2021-03-23 | 2022-02-23 | 交互方法和交互装置 |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN113048982B (zh) |
BR (1) | BR112023019025A2 (zh) |
WO (1) | WO2022199311A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113048982B (zh) * | 2021-03-23 | 2022-07-01 | 北京嘀嘀无限科技发展有限公司 | Interaction method and interaction apparatus |
CN113470408A (zh) * | 2021-07-16 | 2021-10-01 | 浙江数智交院科技股份有限公司 | Traffic information prompting method and apparatus, electronic device, and storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060167626A1 (en) * | 2005-01-24 | 2006-07-27 | Denso Corporation | Navigation system and program for controlling the same |
CN105225496A (zh) * | 2015-09-02 | 2016-01-06 | 上海斐讯数据通信技术有限公司 | Road traffic early warning system |
CN108010362A (zh) * | 2017-12-29 | 2018-05-08 | 百度在线网络技术(北京)有限公司 | Method, apparatus, storage medium, and terminal device for pushing driving road condition information |
CN110364008A (zh) * | 2019-08-16 | 2019-10-22 | 腾讯科技(深圳)有限公司 | Road condition determination method and apparatus, computer device, and storage medium |
US20200175855A1 (en) * | 2018-12-03 | 2020-06-04 | Hyundai Motor Company | Traffic information service apparatus and method |
CN111314651A (zh) * | 2018-12-11 | 2020-06-19 | 上海博泰悦臻电子设备制造有限公司 | Road condition display method and system based on V2X technology, V2X terminal, and V2X server |
CN111325999A (zh) * | 2018-12-14 | 2020-06-23 | 奥迪股份公司 | Vehicle driving assistance method and apparatus, computer device, and storage medium |
CN111739294A (zh) * | 2020-06-11 | 2020-10-02 | 腾讯科技(深圳)有限公司 | Road condition information collection method, apparatus, device, and storage medium |
CN113048982A (zh) * | 2021-03-23 | 2021-06-29 | 北京嘀嘀无限科技发展有限公司 | Interaction method and interaction apparatus |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008031779A (ja) * | 2006-07-31 | 2008-02-14 | Atsunobu Sakamoto | Congestion prevention for motor roads |
CN104851295B (zh) * | 2015-05-22 | 2017-08-04 | 北京嘀嘀无限科技发展有限公司 | Method and system for acquiring road condition information |
CN109326123B (zh) * | 2018-11-15 | 2021-01-26 | 中国联合网络通信集团有限公司 | Road condition information processing method and apparatus |
- 2021-03-23 CN CN202110309280.4A patent/CN113048982B/zh active Active
- 2022-02-23 BR BR112023019025A patent/BR112023019025A2/pt unknown
- 2022-02-23 WO PCT/CN2022/077520 patent/WO2022199311A1/zh active Application Filing
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116972871A (zh) * | 2023-09-25 | 2023-10-31 | 苏州元脑智能科技有限公司 | Driving route pushing method and apparatus, readable storage medium, and system |
CN116972871B (zh) * | 2023-09-25 | 2024-01-23 | 苏州元脑智能科技有限公司 | Driving route pushing method and apparatus, readable storage medium, and system |
Also Published As
Publication number | Publication date |
---|---|
CN113048982A (zh) | 2021-06-29 |
CN113048982B (zh) | 2022-07-01 |
BR112023019025A2 (pt) | 2023-10-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022199311A1 (zh) | Interaction method and interaction apparatus | |
US10077986B2 (en) | Storing trajectory | |
CN113029177B (zh) | Frequency-based traffic trip characterization | |
TWI684746B (zh) | System, method, and non-transitory computer-readable medium for displaying the movement and travel route of a vehicle on a map | |
US10066954B1 (en) | Parking suggestions | |
JP6488594B2 (ja) | Automatic driving support system, automatic driving support method, and computer program | |
US11441918B2 (en) | Machine learning model for predicting speed based on vehicle type | |
RU2677164C2 (ru) | Способ и сервер создания прогнозов трафика | |
KR20210137197A (ko) | Route generation method and apparatus, electronic device, and storage medium | |
US9752886B2 (en) | Mobile trip planner and live route update system | |
TW201250208A (en) | Navigation system and route planning method thereof | |
JP7106794B2 (ja) | Road condition prediction method, apparatus, device, program, and computer storage medium | |
US20220155091A1 (en) | Landmark-assisted navigation | |
JP5949435B2 (ja) | Navigation system, video server, video management method, video management program, and video presentation terminal | |
US20180017406A1 (en) | Real-Time Mapping Using Geohashing | |
JP2015076078A (ja) | Congestion prediction system, terminal device, congestion prediction method, and congestion prediction program | |
JP2015076077A (ja) | Traffic volume estimation system, terminal device, traffic volume estimation method, and traffic volume estimation program | |
JP6786376B2 (ja) | Evaluation apparatus, evaluation method, and evaluation program | |
JP2019020172A (ja) | Route suggestion apparatus and route suggestion method | |
US20230097373A1 (en) | Traffic monitoring, analysis, and prediction | |
RU2664034C1 (ru) | Способ и система создания информации о трафике, которая будет использована в картографическом приложении, выполняемом на электронном устройстве | |
US11719554B2 (en) | Determining dissimilarities between digital maps and a road network using predicted route data and real trace data | |
JP6606354B2 (ja) | Route display method, route display apparatus, and database creation method | |
US20210364312A1 (en) | Routes on Digital Maps with Interactive Turn Graphics | |
Richly et al. | Predicting location probabilities of drivers to improved dispatch decisions of transportation network companies based on trajectory data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22773973 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2023/011293 Country of ref document: MX |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112023019025 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 112023019025 Country of ref document: BR Kind code of ref document: A2 Effective date: 20230919 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18/01/2024) |