EP3759952A1 - Traffic management of proprietary data in a network - Google Patents
Info
- Publication number
- EP3759952A1 (Application EP19712332.6A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- processor
- data
- determining
- dedicated pipeline
- sensor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
- H04W4/38—Services specially adapted for particular environments, situations or purposes for collecting sensor information
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/40—Bus networks
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/46—Interconnection of networks
- H04L12/4604—LAN interconnection over a backbone network, e.g. Internet, Frame Relay
- H04L12/462—LAN interconnection over a bridge based backbone
- H04L12/4625—Single bridge functionality, e.g. connection of two networks over a single bridge
Definitions
- the present invention relates to the field of data transmission in a network.
- Such networks may include sensors, which work in tandem to collect information about various aspects of the vehicle’s operation, and one or more gateway nodes that transmit the collected data to a remote server for analysis.
- These networks are often open networks, meaning that all nodes on the network are able to observe all data transmitted by every other node. However, this may become problematic in a vehicle network made up of sensors from competing manufacturers.
- aspects of embodiments of the present invention are directed to a sensor network having sensors connected through a system bus, whereby every sensor on the system bus is segregated from all other sensors by a bridging device, which may selectively allow or block data traffic to pass to the sensor based on the sensitivity of the data being transmitted on the system bus.
- the sensor network is utilized in a smart trailer system of a commercial vehicle capable of communicating sensor data to a remote server.
- a method for traffic management of proprietary data in a network system comprising a gateway and a sensor communicatively coupled to the gateway via a data bus, the method including: determining, by a processor of a bridging device, whether a dedicated pipeline for transmission to the gateway is available; in response to determining that the dedicated pipeline is available: transmitting, by the processor, a request for the dedicated pipeline; determining, by the processor, whether the dedicated pipeline has been established between the bridging device and the gateway; and in response to determining that the dedicated pipeline has been established with the bridging device: requesting and queueing, by the processor, the proprietary data from the sensor; transmitting, by the processor, the proprietary data from the sensor to the gateway via the dedicated pipeline; and transmitting, by the processor, a dedicated pipeline release signal to the gateway indicating release of the dedicated pipeline between the bridging device and the gateway.
- the method further includes: receiving, by the processor, a request from the sensor to send proprietary data, prior to determining whether the dedicated pipeline is available.
- screening all communication on the data bus includes: masking, by the processor, addresses of incoming data traffic prior to forwarding the data traffic to the sensor, or sending, by the processor, a signal of all zeroes to the sensor instead of the incoming data traffic.
- the method further includes: in response to determining that the dedicated pipeline is not established with the bridging device: determining, by the processor, whether the dedicated pipeline has been released; discontinuing, by the processor, the screening of all communication on the data bus from the sensor; and resuming, by the processor, normal transmission of non-proprietary data to the gateway.
- determining that the dedicated pipeline is available includes: receiving, by the processor, a dedicated pipeline open signal from the gateway via the data bus.
- the transmitting of the proprietary data includes: requesting, by the processor, the proprietary data from the sensor; receiving, by the processor, the proprietary data from the sensor; queueing, by the processor, the proprietary data in a queue; transmitting, by the processor, the queued proprietary data to the gateway via the dedicated pipeline; and receiving, by the processor, a proprietary data received signal from the gateway indicating receipt of transmitted data.
- the transmitting of the proprietary data further includes: clearing, by the processor, the queue of the queued proprietary data; and requesting, by the processor, more proprietary data from the sensor.
- the transmitting of the dedicated pipeline release signal is in response to one or more of: determining, by the processor, that all proprietary data at the sensor has been successfully sent to the gateway;
- the proprietary data includes diagnostic and/or troubleshooting data corresponding to internal operations of the sensor.
- a method for traffic management of proprietary data in a network system including a gateway and a sensor node communicatively coupled to the gateway via a data bus, the method including: determining, by a processor of the gateway, whether there is an active connection to a remote server; and in response to determining that there is the active connection to the remote server: broadcasting, by the processor, availability of a dedicated pipeline for transmission of proprietary data to the gateway via the data bus; determining, by the processor, whether a request for the dedicated pipeline is received from the sensor node; and in response to determining that the request for the dedicated pipeline is received from the sensor node within a set period of time: broadcasting, by the processor, on the data bus, a dedicated pipeline reserved signal indicating establishment of the dedicated pipeline between the gateway and the sensor node; determining, by the processor, whether the proprietary data has been received from the sensor node; and in response to determining that the proprietary data has been received: transmitting, by the processor, the proprietary data to the remote server.
- the method further includes: in response to determining that the request for the dedicated pipeline is not received from the sensor node within the set period of time, or in response to determining that the communication timer has expired: broadcasting, by the processor, a dedicated pipeline closed signal to the sensor node via the data bus, the dedicated pipeline closed signal indicating to the sensor node that the dedicated pipeline is no longer available.
- the method further includes: prior to determining whether there is an active connection to the remote server, broadcasting, by the processor, a dedicated pipeline closed signal to the sensor node via the data bus, the dedicated pipeline closed signal indicating resumption of normal data transfer operations.
- the method further includes: transmitting, by the processor, the proprietary data to the remote server.
- the transmitting of the proprietary data includes: determining, by the processor, whether there is an existing queue of proprietary data to transmit to the remote server; and in response to determining that there is an existing queue of proprietary data: transmitting, by the processor, the queue of proprietary data to the remote server.
- the method further includes, in response to determining that there is an existing queue of proprietary data: receiving, by the processor, an acknowledgment of transmission from the remote server; and clearing, by the processor, the existing queued data.
- FIG. 1 is a block diagram of a commercial vehicle including the smart trailer system, according to some exemplary embodiments of the invention.
- FIG. 2 is a block diagram of a trailer sensor network in communication with the master controller, according to some exemplary embodiments of the present invention.
- FIG. 3 is a schematic diagram of a SIB facilitating communication between the master controller and a sensor, according to some exemplary embodiments of the present invention.
- FIG. 4 is a diagram illustrating the fleet managing server in communication with the STS and one or more end user devices, according to some embodiments of the present invention.
- FIG. 5 illustrates a network system according to some exemplary embodiments of the present invention.
- FIGS. 6A-6C illustrate a process of sending proprietary data from a sensor of the network system to a remote server via a dedicated pipeline, as performed by a gateway of the network system, according to some exemplary embodiments of the present invention.
- FIG. 7 illustrates a process of sending proprietary data from a sensor to the gateway via a dedicated pipeline, as performed by a bridge device of the network system, according to some exemplary embodiments of the present invention.
- FIG. 1 is a block diagram of a commercial vehicle including the smart trailer system 100, according to some exemplary embodiments of the invention.
- the commercial vehicle includes a tractor 10 and a trailer 20, which houses the smart trailer system (STS) 100.
- the STS 100 includes a sensor network 101, which may include a plurality of sensors 102-1, 102-2, ..., 102-n, and a master controller (e.g., a gateway or a sensor distribution module (SDM)) 104.
- the STS 100 further includes a wireless communication module (e.g., a cellular modem/transceiver 106 and/or a wireless transceiver 135) for transmitting the sensor network data to a fleet monitoring server (also referred to as a fleet managing server) 30 that manages the associated trailer fleet, over a communications network (e.g., a cellular network) 40, for further processing and analysis.
- the server 30 may manage the data generated by the sensor network 101.
- One or more user devices 50 may be utilized to view and analyze the sensor network data.
- the STS 100 may provide trailer security, diagnostics, environmental monitoring, cargo analysis, predictive maintenance monitoring, telemetry data, and/or the like.
- FIG. 2 is a block diagram of a trailer sensor network 101 in communication with the master controller 104, according to some exemplary embodiments of the present invention.
- the master controller 104 serves as the gateway that manages the network 101 and all communications to and from the fleet monitoring server 30.
- a plurality of sensor interface boards (SIBs) 110 are communicatively coupled to the master controller 104 via a data bus (e.g., a serial controller area network (CAN) bus) 112.
- Each SIB 110 monitors and controls one or more local sensors and actuators installed at various locations within the trailer 20.
- the sensors 102 of the STS 100 may be coupled to the master controller 104 via a SIB 110 on the data bus 112 (e.g., as is the case with the sensors 102-1 to 102-n of FIG. 2) or directly via a bus interface adapter (e.g., a CAN bus interface adapter, as is the case with sensor 102-i of FIG. 2).
- while every SIB 110 is illustrated as being connected to a sensor 102 and an actuator 108 (e.g., 108-1, 108-2, ..., 108-n), embodiments of the present invention are not limited thereto.
- each SIB 110 may be coupled to one or more sensors 102 and/or one or more actuators 108.
- the master controller 104 includes an onboard microcontroller (e.g., a central processing unit (CPU)) 120, which manages all functions of the master controller 104 including self-tests and diagnostics; a memory device (e.g., a volatile and/or non-volatile memory) 122 for storing the data collected from the sensors 102 as well as firmware, operational and configuration data of the master controller 104; a bus transceiver 124 for interfacing with the SIBs 110 and any directly connected sensors 102 via the data bus 112; and a power management unit (PMU) 128 for generating all operating voltages required by the STS 100. While the embodiments of FIG. 2 illustrate the PMU 128 as being part of the master controller 104, embodiments of the invention are not limited thereto. For example, the PMU 128 may be external to the master controller 104 (e.g., as shown in FIG. 1).
- the master controller 104 ensures that the data in the memory 122 is preserved under conditions including loss of power, system reset, and/or the like.
- the memory 122 may have sufficient capacity to store a minimum of two weeks of data locally.
- the microcontroller 120 may retrieve the requested data from the memory 122 and send it to the server 30 via the cellular modem 126 and/or the WiFi transceiver 135. The microcontroller 120 may also delete data from the memory 122 upon receiving a delete data request from the server 30.
- the PMU 128 may receive a DC voltage (e.g., a fixed DC voltage) from the tractor 10 (e.g., the tractor power 142 as shown in FIG. 1) via an electrical cable (e.g., a 7-way or 15-way tractor connector), and may utilize it to generate the regulated voltage(s) (e.g., the regulated DC voltage(s)) used by the master controller 104 and the other components in the STS 100.
- the PMU 128 may include protection circuits for preventing damage to the STS 100 in the event of power surges (e.g., a load dump), overcurrent, overvoltage, reverse battery connection, and/or the like.
- the PMU 128 includes a backup battery 129 for providing power to the STS 100 in the absence of tractor power.
- the backup battery 129 may have sufficient capacity to power operations of the STS 100 for a minimum of 48 hours without an external power source (e.g., without the tractor power 142) and/or solar panel 140.
- the PMU 128 may also receive electrical power from auxiliary power sources 140, such as solar panels that may be installed on the trailer 20, an onboard generator, an onboard refrigerator (e.g., refrigerator battery), and/or the like.
- the PMU 128 monitors each source and selects which power source to utilize to power the master controller 104 and the STS 100 as a whole.
- the PMU 128 provides status information on the power parameters (e.g., voltages and currents) of each power source.
- the PMU 128 may generate an alert when any of the above power parameters are outside of normal operating ranges.
- the PMU 128 may perform a discharge test on the backup battery 129, which allows the STS 100 to compare the discharge profile of the backup battery 129 to that of a new battery, and determine an estimate of the remaining battery life.
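- By way of illustration, the following minimal C sketch estimates remaining battery life by comparing a measured discharge profile to that of a new battery, as described above; the sample voltages, the sample count, and the averaged-sag heuristic are illustrative assumptions rather than details of the disclosure.

```c
#include <stdio.h>

#define SAMPLES 5  /* number of points in each discharge profile (assumed) */

int main(void)
{
    /* Voltage (V) sampled at fixed intervals under a known test load. */
    const double new_profile[SAMPLES] = {12.6, 12.4, 12.2, 12.0, 11.8};
    const double measured[SAMPLES]    = {12.5, 12.1, 11.8, 11.4, 11.0};

    /* Crude health metric: average voltage sag relative to a new battery. */
    double sag = 0.0;
    for (int i = 0; i < SAMPLES; i++)
        sag += (new_profile[i] - measured[i]) / new_profile[i];
    double health = 1.0 - sag / SAMPLES;  /* 1.0 ~ new; lower ~ degraded */

    printf("estimated remaining battery health: %.0f%%\n", health * 100.0);
    return 0;
}
```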
- the PMU 128 acts as the interface between the microcontroller 120 and the air brake lock system 138 (i.e., the trailer’s emergency air brake system).
- the STS 100 is also capable of engaging the air brake lock system 138 for security purposes, such as when an unauthorized tractor connects to the trailer 20 and attempts to move it. Because the air brake lock system 138 is a safety related feature, the STS 100 has safeguards in place to ensure that the emergency brake does not engage while the trailer 20 is in motion. For example, the master controller 104 prevents the air brake lock system 138 from engaging the emergency brake when the trailer 20 is in motion.
- the air brake lock system 138 includes a pressure sensor 102-1, which monitors the brake system air pressure, and an air brake actuator 108-1 for engaging and disengaging the air line to the emergency brake system.
- the master controller 104 includes a cellular modem 126 for providing a wireless communication link between the STS 100 (e.g., the master controller 104) and the fleet monitoring server 30.
- the cellular modem 126 may be compatible with cellular networks such as 4G and/or LTE networks.
- the cellular modem 126 may facilitate over-the-air updates of the master controller 104. While the embodiments of FIG. 2 illustrate the cellular modem 126 as being part of the master controller 104, embodiments of the invention are not limited thereto.
- the cellular modem 126 may be external to the master controller 104 (e.g., as shown in FIG. 1).
- the master controller 104 may also include one or more of a USB controller 130, an Ethernet controller 132, and a WiFi controller 134.
- the USB and Ethernet controllers 130 and 132 may allow the mater controller 104 to interface with external components via USB and Ethernet ports 131 and 133, respectively.
- the WiFi controller 134, which includes a wireless transceiver 135, may support communication between authorized users (e.g., a driver or maintenance personnel) and the fleet managing server 30 via the cellular modem 126.
- the WiFi transceiver 135 may be mounted in a location at the trailer 20 that ensures that communication can be maintained from anywhere within a radius (e.g., 100 feet) of the center of the trailer 20.
- the master controller 104 also includes a Bluetooth®/Zigbee® transceiver 127 for communicating with wireless sensor nodes (i.e., those sensors that are not connected to the data bus 112) within the trailer 20.
- an auxiliary wireless transceiver that is independent of the WiFi controller 134 may be mounted to the trailer 20 as part of the STS 100 in order to perform regular self-tests of the WiFi system supported by the WiFi controller 134.
- the master controller 104 provides an idle mode, which reduces operating power by suspending operation of all peripheral components (e.g., all sensors and actuators).
- the master controller 104 can enter into sleep mode, which substantially reduces or minimizes operating power by placing each component of the master controller 104 into its lowest power mode.
- the firmware of the master controller 104 may be updated wirelessly through the cellular modem 126 (as an over-the-air update) or the WiFi transceiver 135, and/or may be updated via a wired connection through, for example, the USB controller 130 or the Ethernet controller 132.
- the master controller 104 is coupled to an access terminal (e.g., an external keypad/keyboard) 136, which allows authorized users, such as drivers and maintenance personnel, to gain access to the STS 100. For example, upon entry of an authentication code, the master controller 104 may perform the functions associated with the code, such as unlocking the trailer door or putting the trailer in lockdown mode.
- the master controller 104 may include an RS-232 transceiver for interfacing with the access terminal 136.
- the access terminal 136 may be attached to an outside body of the trailer 20.
- the STS 100 includes a global positioning system (GPS) receiver for providing location data that can supplement the data aggregated by the sensor network 101.
- the GPS receiver may be integrated with the master controller 104 or may be a separate unit.
- each time power is first applied to the master controller 104 (e.g., when the operator turns the ignition key or when the STS 100 is activated), or in response to an external command (e.g., a diagnostic request), the master controller 104 performs a self-check or diagnostic operation in which the master controller 104 first checks the status of each of its components (e.g., the PMU, RS-232 interface, Ethernet controller, etc.) and then checks each element (e.g., sensor 102 or SIB 110) attached to the data bus 112.
- the master controller 104 then may send an alert command to the fleet monitoring server 30 when any component or element has a faulty status.
- the alert command may include the status data of all elements attached to the data bus 112.
- the master controller 104 also communicates with the PMU 128 to determine the source of input power as, for example, tractor power 142 or battery backup 129. Once the self-check operation is concluded, the master controller 104 commences normal operation during which the master controller 104 may periodically or continuously receive sensory data from the sensors 102 and send the corresponding data packages to the fleet monitoring server 30 at a set or predetermined rate. In some examples, the rate of information transmission by the master controller 104 may be variable depending on the power state of the STS 100 (e.g., depending on whether the STS 100 is in idle mode, sleep mode, normal operation mode, etc.).
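- As a sketch of how such a variable transmission rate might be keyed to the power state, the following C snippet maps each power mode to a reporting interval; the mode names and interval values are illustrative assumptions only.

```c
#include <stdio.h>

/* Assumed power states of the STS; the intervals below are placeholders. */
typedef enum { MODE_NORMAL, MODE_IDLE, MODE_SLEEP } power_mode_t;

/* Pick a reporting interval (seconds) based on the current power mode. */
static unsigned report_interval_s(power_mode_t mode)
{
    switch (mode) {
    case MODE_NORMAL: return 60;    /* frequent updates in normal operation */
    case MODE_IDLE:   return 600;   /* reduced rate while peripherals idle */
    case MODE_SLEEP:  return 3600;  /* minimal heartbeat in sleep mode */
    }
    return 600;  /* defensive default */
}

int main(void)
{
    printf("sleep-mode interval: %u s\n", report_interval_s(MODE_SLEEP));
    return 0;
}
```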
- the master controller 104 may send a variety of commands to the fleet managing server 30, which may include an STS status command, which is utilized to send STS status (e.g., self-test results, operating mode, etc.) to the fleet managing server 30; an alert/fault command, which is utilized to send alerts to the server 30 (e.g., based on the detection of STS faults and/or trailer events that trigger alert settings); an SDM data command, which is used to send the measured data aggregated from the sensor network 101; a configuration alert, which is utilized to notify the fleet managing server 30 when the STS configuration is modified; and an STS access alert, which is utilized to notify the fleet managing server 30 when a user (e.g., a driver or a maintenance operator) attempts to access the STS 100 via WiFi.
- the master controller 104 is capable of setting and dynamically adjusting the data rate of each sensor (e.g., the pace at which measurements are made) independently of the other sensors (e.g., through the corresponding SIB 110).
- each sensor interface board (SIB) 110 manages an assigned set of one or more sensors 102. Some nodes may also manage one or more actuators 108. Each sensor 102 may translate a physical property, such as heat, mechanical motion, force, light, and/or the like, into a corresponding electrical signal. Each actuator 108 is configured to produce an associated mechanical motion when activated (e.g., when an activation voltage is applied to it), and to return to its idle/original position when deactivated (e.g., when the activation voltage is removed).
- the SIB 110 includes a SIB controller 150 (e.g., a programmable logic unit), a SIB power manager 152, a serial interface 154, and onboard SIB memory 156.
- the SIB controller 150 is configured to manage the operations of the SIB 110 and to facilitate communication between the master controller 104 and any sensors 102 and/or actuators 108.
- the SIB power manager 152 includes an onboard power conversion circuit, which converts the system voltage received from the master controller 104 into the required operating voltages for the SIB circuitry as well as the voltages utilized by the sensor(s) 102 and any actuator(s) 108.
- the SIB 110 is also coupled to a 3-axis accelerometer 103-1, a temperature sensor 103-2, and a light sensor 103-3.
- the sensors 103-1 to 103-3 may be integrated with the SIB 110 or may be external to the SIB 110.
- the sensors 102 may include, for example, a wheel speed sensor, one or more tire pressure sensors (TPSs), one or more wheel-end and wheel bearing temperature sensors, a smoke detector, a humidity sensor, one or more vibration detectors, an odometer/speedometer, one or more axle hub sensors, one or more brake wear sensors, a position sensor (e.g., a magnetic position sensor), a digital microphone, and/or the like.
- the odometer/speedometer may be installed on every tire, or on a dedicated tire from which this information is taken; and a brake stroke sensor and brake/wheel-end temperature sensors may be installed on each brake pad/wheel end.
- Door open detection may be facilitated by a position sensor (e.g., a magnetic position sensor) and/or the like.
- the SIB 110 (e.g., the SIB controller 150) may be configured to (e.g., programmed to) be compatible with the particular sensors 102 and/or actuators 108 coupled to it.
- the SIB 110 may provide an idle mode that reduces operating power by suspending operation of all peripherals (e.g., all sensors 102/103 and actuators 108). Additionally, the SIB 110 provides a sleep mode which reduces operating power to the minimum achievable level by placing each circuit on the SIB 110 and all peripherals into their lowest power mode. Idle and sleep modes may be activated and deactivated through a command from the master controller 104.
- the SIB 110 may prompt the sensors 102/103 to make measurements at a predetermined pace, which is configurable through the master controller 104.
- Measured data is then stored at the SIB memory 156 for transmission to the master controller 104.
- the SIB 110 may enter idle mode in between measurements.
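- A minimal sketch of this per-sensor pacing, with a simulated clock in place of real SIB firmware timers, is shown below; the structure names, periods, and buffer size are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed per-sensor configuration: each sensor has its own measurement
 * pace, which the master controller could adjust independently. */
typedef struct {
    uint32_t period_ms;    /* measurement pace for this sensor */
    uint32_t next_due_ms;  /* next scheduled sample time */
} sensor_cfg_t;

static int16_t read_sensor(int idx) { return (int16_t)(100 + idx); } /* stub */

int main(void)
{
    sensor_cfg_t cfg[2] = { { .period_ms = 1000, .next_due_ms = 0 },
                            { .period_ms = 5000, .next_due_ms = 0 } };
    int16_t sib_memory[64];  /* stand-in for the SIB memory 156 */
    int stored = 0;

    for (uint32_t now = 0; now <= 10000; now += 100) {  /* simulated clock */
        for (int i = 0; i < 2; i++) {
            if (now >= cfg[i].next_due_ms) {
                sib_memory[stored++ % 64] = read_sensor(i);  /* buffer it */
                cfg[i].next_due_ms = now + cfg[i].period_ms;
            }
        }
        /* real firmware would enter idle mode here until the next sample */
    }
    printf("buffered %d readings\n", stored);
    return 0;
}
```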
- the master controller 104 together with the SIB 110 provide a plug-and-play sensory and telemetry system allowing for sensors and/or actuators to be removed from or added to the STS 100 as desired, thus providing an easily (re)configurable system.
- the shared data bus 112 may include a plurality of conductors for carrying power and data.
- a sensory node including a SIB 110 and one or more sensors 102 may branch off of the communication bus 112 using a T-connector or junction box 113, which facilitates the connection of the sensory node to the shared communication bus 112 via a bus extension 115.
- the bus extension 115 may include the same conductors as the shared communication bus 112, and the T-connector 113 may electrically connect together corresponding conductors of the shared communication bus 112 and the bus extension 115.
- temperatures ranging from about -50 to about +100 degrees Celsius.
- FIG. 4 is a diagram illustrating the fleet managing server 30 in communication with the STS 100 and one or more end user devices 50.
- the fleet managing server 30 may be in communication with the STS 100 and one or more end user devices 50.
- Communications between the fleet managing server 30, the STS 100, and an end user device 50 may traverse a telephone, cellular, and/or data communications network 40.
- the communications network 40 may include a private or public switched telephone network (PSTN), local area network (LAN), private wide area network (WAN), and/or public wide area network such as, for example, the Internet.
- the communications network 40 may also include a wireless carrier network including a code division multiple access (CDMA) network, global system for mobile communications (GSM) network, or any wireless network/technology conventional in the art, including but not limited to 3G, 4G, LTE, and the like.
- the user device 50 may be communicatively connected to the STS 100 through the communications network 40 (e.g., when the user device 50 has its own 4G/LTE connection). In some examples, the user device 50 may communicate with the STS 100 and the fleet managing server 30 through the WiFi network created by the wireless transceiver 135 of the STS 100, when within WiFi range.
- the fleet managing server 30 aggregates a variety of telematics and diagnostics information relating to each specific trailer in the fleet and allows for the display of such information on an end user device 50 or an operator device 31 through a web portal.
- the web portal of the fleet managing server 30 may allow the operator to administer the system by designating authorized personnel who may access and use the STS 100, as well as drivers and maintenance personnel who are authorized to move and/or maintain the trailers in the fleet.
- the fleet managing server 30 provides, through its web portal, a comprehensive fleet management system by integrating system administration tools, telematics information, and trailer status information.
- the web portal may provide a set of screens/displays that allow the operator to easily view summary information relating to the fleet of assets being managed.
- the web portal may also provide a set of screens/displays which allow the operator to view lower levels of detail related to various elements of the fleet. Such information may be presented in a pop-up, overlay, new screen, etc.
- the fleet managing server 30 includes a system administration server 32, a telematics server 34, an analytics server 36, and a database 38.
- the system administration server 32 may provide system administration tools that allow operators to manage access to the fleet system and set the configurations of the fleet system. Access management allows the operator to create and maintain a database of users who are authorized to access and exercise assigned functions of the system. For example, an individual may be designated as the administrator and have access to all aspects of the web portal, and another individual may be designated as a driver or a maintenance technician and be granted a more restricted and limited access to the features of the web portal.
- the telematics server 34 may provide location-related information relative to each asset (e.g., each STS 100) in the fleet.
- the telematics information includes geographic location, speed, route history, and other similar types of information which allow the fleet manager to understand the geographic history of a given asset.
- the analytics server 36 may provide trailer status information related to data collected from sensors and systems located on the STS 100 of the trailer itself. This information may provide a dynamic image of the critical systems on a given trailer, such as tire pressure, brakes, cargo temperature, door/lock status, etc.
- the analytics server 36 may analyze sensory and telematics data received from each STS 100 of a fleet and provide a variety of information to the fleet operator, including an organized list of alerts based on severity and category for each STS 100 or the entire fleet; a percentage of the fleet that is in use; and the like.
- Driver information may include the driver’s identification number, most current assignment, a list of all events of excessive speed, a list of all events of excessive G-force due to braking or high-speed turning, a list of all excessive ABS events, and the like.
- Trailer status and configuration may include information such as odometer reading, a list of all components installed on a trailer and the status thereof, pressure of each tire, brake status, ABS fault, light out (faulty light) status, axle sensory information, preventive maintenance summary, present speed and location, self-test/diagnostic parameters, pace of sensor measurements, available memory capacity, date of last firmware update, history of data communications, battery capacity, all parameters related to power management (e.g., voltages, currents, power alerts, etc.), and/or the like.
- the data generated by and consumed by each of the servers 32, 34, and 36 may be respectively stored in and retrieved from the database 38.
- the fleet managing server 30 may also allow control over various aspects of an STS 100. For example, upon invocation by an operator, the fleet managing server 30 may send a command signal to the STS 100 to initiate a self-test by the master controller 104, initiate capture and transmission of all sensor data, activate or release the door locks, activate or release the air lock, and/or the like. The analytics server 36 may also issue a number of alerts, based on the analyzed data, which may be pushed to the operator device 31.
- alerts may include a break-in alert, when the proximity detector mounted on the door indicates a door-open status; an unauthorized tractor alert, when the STS 100 detects airline and/or 7-way connector connections and a proper authorization code is not received via the WiFi transceiver 135 and/or the local keypad 136; a stolen trailer alert, when the air lock is engaged and the sensors detect trailer motion; a brake tamper alert, when the air lock is bypassed or the cable to the air lock from the master controller 104 is cut; a tire pressure alert, when a tire pressure is outside of the specified range; a brake lining alert, when the brake sensor indicates that a brake lining is outside of the specified range; a hub fault alert, when the hub sensor indicates that hub conditions are outside of the specified range; a SIB fault self-test alert, when a self-test is run on a SIB 110 and the results indicate a fault; a sensor fault alert, when a self-test is run on a sensor and the results indicate a fault; and a data bus fault self-test alert, when a self-test is run on the data bus 112 and the results indicate a fault.
- the mobile application 52 on the end user device 50 allows the user to enter an authentication code to log in to the STS 100 system (e.g., upon verification by, and permission from, the system administration server 32).
- Configuration of the mobile app 52 on a given device 50 may be based upon the authenticated user’s access level (e.g., a truck driver may have access to one set of features, while an installation/maintenance person may have access to a different set of features).
- the mobile app 52 may be capable of providing access to historical data stored in the STS local memory 122, allowing authorized users to run a scan of all elements in the STS 100 and to run diagnostics on the STS 100 (i.e., run a self-check diagnostic routine), and displaying an alert (visual and auditory) when an alert is received from the STS 100 (the alert may be routed through the analytics server 36 or be directly received from the STS 100).
- FIG. 5 illustrates a network system 200 according to some embodiments of the present invention.
- a network system 200 includes a plurality of sensors 202-1 to 202-n (where n is an integer greater than 1) communicatively connected to a gateway 204 via a data bus 212 (also referred to as a system bus or common bus), and a plurality of bridge devices 210-1 to 210-n electrically coupled between the data bus 212 and the plurality of sensors 202-1 to 202-n.
- each bridge device 210 may act as an intermediary between the sensor 202 and the gateway 204.
- the combination of a sensor 202 and its associated bridge device 210 may be referred to as a sensor node.
- the sensor 202 and the data bus 212 may be the same or substantially the same as the sensor 102 and the data bus 112, respectively.
- the gateway 204 may facilitate communication between the sensors 202, which collect data (e.g., sensory and proprietary data), and a remote server 30, which collects and analyzes the data.
- the gateway 204 may be the same or substantially the same as the master controller 104 described above with reference to FIGS. 1-4.
- the gateway 204 may include all of the components and functionality of the master controller 104; however, embodiments of the present invention are not limited thereto.
- the gateway 204 may not include all of the components of the master controller 104.
- the gateway 204 includes a processor 220, a gateway memory 222, and a bus transceiver 224, which may be the same as or substantially the same as the CPU 120, the memory 122, and the bus transceiver 124 of the master controller 104, respectively.
- the gateway 204 further includes a wireless transceiver 226 that enables wireless communication with the remote server 30.
- the wireless transceiver 226 may include the cellular modem 126 and/or the WiFi controller 134 of the master controller 104.
- the bridge device 210 may act as an intermediary device that facilitates communication between the sensor 202 to which it is attached and the gateway 204.
- the bridge device 210 may be the same or substantially the same as the SIB 110 described above with reference to FIGS. 1-4.
- the bridge device 210 may include all of the components and functionality of the SIB 110;
- the bridge device 210 may not include all of the components of the SIB 110.
- the bridge device 210 includes a bridge controller 250, a bridge memory 252, and a bus interface 254.
- the bridge controller 250 and the bus interface 254 may be the same as or substantially the same as the SIB controller 150 and the serial interface 154 of the SIB 110.
- the bridge memory 252 may store data collected by the sensor 202.
- the sensors 202 may collect several types of data, including sensed data and proprietary data.
- Sensed data may include a measurement of an external physical property/parameter, such as temperature, speed, acceleration, voltage, electrical current, etc.
- Proprietary data (also referred to as “raw data”) may include information pertaining to internal operations of the sensor 202, such as diagnostic and troubleshooting data that a manufacturer of the sensor 202 may be able to use to debug and/or improve the sensor 202.
- Proprietary information may be collected far less frequently than sensed data. For example, while a sensor may collect sensed data at a rate of about 100k/s, proprietary information may be collected about once every 5 to 10 seconds.
- the sensor 202 may tag internal operational information as proprietary since, in some instances, a competing sensor manufacturer may be able to reverse engineer a product by eavesdropping on this information as it is being transmitted over the data bus 212.
- each bridge device 210 blocks or allows data traffic to pass to the associated sensor 202 based on the sensitivity of the data.
- the other bridge devices 210 (i.e., those that do not correspond to the sensor 202-i) block the other sensors 202 (i.e., all sensors except 202-i) on the network from being able to eavesdrop on the proprietary data being transmitted.
- a bridge device 210 may act as a pass-through device except when proprietary data is being broadcast on the data bus 212.
- the gateway 204 may broadcast a signal (e.g., a dedicated pipeline open signal) on the data bus 212 to indicate, to all bridge devices 210 and sensors 202, the possibility of establishing a dedicated pipeline for the purpose of transmitting sensitive proprietary information.
- the gateway 204 establishes the dedicated pipeline when a sensor 202 indicates that it has some proprietary data to transmit. While the dedicated pipeline is open (or established), no other sensor 202 may transmit data to, or receive data from, the data bus 212, as the data bus 212 is being used to transfer proprietary information. Once the dedicated pipeline is closed, all sensors 202 can resume the transfer of non-proprietary data (e.g., sensed data) via the data bus 212.
- the ratio of time devoted to transmitting proprietary data through the dedicated pipeline to time devoted to transmitting other data may be preset or may be adjustable based on priorities of the system. For example, if the network system is a critical control system, the majority of data transmitted may be proprietary.
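- The disclosure names the handshake signals but does not define a wire format; a hypothetical, header-style C sketch of the message set is shown below, with all enum values and the struct layout being assumptions.

```c
#include <stdint.h>

/* Assumed message types for the dedicated-pipeline handshake. */
typedef enum {
    MSG_PIPELINE_OPEN,      /* gateway: a dedicated pipeline may be requested */
    MSG_PIPELINE_REQUEST,   /* bridge:  request the pipeline for its sensor */
    MSG_PIPELINE_RESERVED,  /* gateway: pipeline established with one bridge */
    MSG_PROPRIETARY_DATA,   /* bridge:  a queued proprietary payload */
    MSG_DATA_RECEIVED,      /* gateway: acknowledges a proprietary payload */
    MSG_PIPELINE_RELEASE,   /* bridge:  all proprietary data has been sent */
    MSG_PIPELINE_CLOSED     /* gateway: normal traffic may resume */
} pipeline_msg_t;

/* Assumed on-bus message layout (CAN-like 8-byte payload). */
typedef struct {
    pipeline_msg_t type;
    uint32_t       src;         /* bus address of the sender */
    uint32_t       dst;         /* intended receiver, if any */
    uint8_t        payload[8];
    uint8_t        len;
} bus_msg_t;
```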
- FIGS. 6A-6C illustrate a process S300 of sending proprietary data from a sensor 202 to the server 30 via a dedicated pipeline, as performed by the gateway 204, according to some embodiments of the present invention.
- FIGS. 6A-6B illustrate the process of receiving and queueing proprietary data from a bridge device 210 corresponding to the sensor 202, and FIG. 6C illustrates the process of transmitting the queued data to the server 30.
- the gateway 204 indicates, to all of the bridge devices 210-1 to 210-n on the data bus 212, the availability of a dedicated pipeline to upload any proprietary data to the remote server (e.g., a cloud server) 30 (S302) by, for example, broadcasting a dedicated pipeline open signal on the data bus 212.
- the gateway 204 may do so after determining that there is an active connection (e.g., an upload link) to the remote server 30.
- the gateway 204 then waits to receive a dedicated pipeline request from a bridge device 210 (S304), which indicates that the bridge device 210 is requesting to initiate a dedicated (e.g., private) link to the server 30 for the purpose of sending proprietary data. If, after a passage of a set period of time (e.g., a configurable period of time, which may, in some examples, be about 0.5 seconds to about 5 seconds), a dedicated pipeline request signal is not received, the gateway 204 checks whether an active link/connection (e.g., an active WiFi and/or cellular connection) exists between the gateway 204 and the server 30 (S306). If an active link is present, the gateway 204 once again checks for the presence of the dedicated pipeline request signal. But if an active link is not present, the gateway 204 proceeds to broadcast closure of the dedicated pipeline (S307) by, for example, broadcasting a dedicated pipeline closed signal on the data bus 212.
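- A condensed sketch of steps S302-S307 is shown below: advertise the pipeline, wait a set period for a request, and re-advertise while the server link stays up. The helper functions are hypothetical stubs standing in for bus and modem I/O.

```c
#include <stdbool.h>
#include <stdio.h>

static bool server_link_active(void)      { return false; }          /* stub */
static bool pipeline_request_seen(int ms) { (void)ms; return false; } /* stub */
static void broadcast(const char *signal) { printf("bus <- %s\n", signal); }

int main(void)
{
    const int wait_ms = 2000;  /* configurable, ~0.5 s to ~5 s per the text */
    (void)wait_ms;

    broadcast("DEDICATED_PIPELINE_OPEN");           /* S302 */
    while (!pipeline_request_seen(wait_ms)) {       /* S304 */
        if (!server_link_active()) {                /* S306 */
            broadcast("DEDICATED_PIPELINE_CLOSED"); /* S307 */
            return 0;
        }
    }
    /* request received: proceed to reserve the pipeline (S308) */
    return 0;
}
```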
- when the gateway 204 detects a dedicated pipeline request signal from a bridge device 210 having a first address (e.g., a first bus address), the gateway transmits a dedicated pipeline reserved signal on the data bus 212, which confirms that the dedicated link has been established between the server 30 and the bridge device 210 (S308).
- the dedicated pipeline reserved signal includes a destination address that matches that of the first address of the bridging device 210 requesting the dedicated link.
- the dedicated pipeline reserved signal also indicates to all other bridge devices 210, which do not match the destination address, that a dedicated link to the server 30 is not available at this time.
- when two or more sensors request the dedicated pipeline at the same time, arbitration may occur between the two or more sensors and one sensor will win according to the protocol of the data bus 212 (e.g., CANBUS or the like).
- the gateway 204 then establishes the dedicated pipeline with the winning sensor 202.
- the winning sensor is the sensor that responds first to the call for establishing a dedicated pipeline or may be the sensor with the higher priority (e.g., as defined by the bus protocol).
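- On a CAN-style bus, arbitration resolves bit-by-bit and the frame with the numerically lowest identifier wins; the toy sketch below illustrates that rule with assumed bridge identifiers.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Competing bridge identifiers (illustrative values). */
    uint32_t requests[] = { 0x1A4, 0x0C2, 0x2F0 };
    uint32_t winner = requests[0];

    /* Lowest identifier = highest priority on CAN. */
    for (int i = 1; i < 3; i++)
        if (requests[i] < winner)
            winner = requests[i];

    printf("bridge 0x%03X wins arbitration\n", (unsigned)winner);
    return 0;
}
```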
- the gateway 204 then initiates and starts the communication timeout timer (S310), and waits to receive proprietary data from the bridge device 210 (e.g., the winning bridge device associated with the winning sensor).
- the proprietary data may include proprietary/raw sensor data, which is to be sent via the dedicated link to the server 30.
- the gateway 204 determines whether proprietary data has been received from the bridge device 210 or not (S312). If not, the gateway 204 checks whether the bridge device 210 has released the dedicated pipeline by, for example, checking whether a dedicated pipeline release signal has been received from the bridge device 210 (S314).
- when all proprietary data stored at the bridge device 210 has been transmitted to the gateway 204, the bridge device 210 indicates completion of transfer by transmitting the dedicated pipeline release signal to the gateway 204. If a dedicated pipeline release signal has been received, the gateway 204 proceeds to broadcast closure of the dedicated pipeline (S315) by, for example, broadcasting a dedicated pipeline closed signal on the data bus 212. If a dedicated pipeline release signal has not been received and the timeout timer has not expired (S316), the gateway 204 continues to listen for proprietary data (S312). Otherwise, the gateway 204 proceeds to broadcast closure of the dedicated pipeline (S315) by, for example, broadcasting a dedicated pipeline closed signal on the data bus 212.
- the timeout timer ensures that even if the bridging device 210 experiences a failure, and hence fails to send any (further) proprietary data or a dedicated pipeline release signal, the network system 200 does not lock up.
- the gateway 204 may proceed to transfer the queued proprietary data to the remote server 30, which is further described with reference to FIG. 6C below.
- when the gateway 204 receives the proprietary data from the bridging device 210 requesting the dedicated link, the gateway 204 queues the proprietary data in the memory 222 and transmits the queued proprietary data to the server 30 at the transmission rate of the active link (e.g., the active WiFi and/or cellular connection). Every time proprietary data is received from the bridging device 210, the gateway 204 confirms receipt of the proprietary data by, for example, transmitting a proprietary data received signal on the data bus 212 (S320). The gateway 204 may also restart the communication timeout timer (S322) to ensure that proprietary data transmission from the bridge device 210 is not prematurely terminated due to the timeout timer running out.
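- A small sketch of S320/S322 follows: acknowledge each proprietary payload and push the communication timeout deadline forward so an active transfer is not cut off early. The timer and bus helpers are hypothetical stand-ins.

```c
#include <stdint.h>
#include <stdio.h>

static uint32_t timeout_deadline_ms;

/* Restart the communication timeout by pushing the deadline forward. */
static void timer_restart(uint32_t now_ms, uint32_t window_ms)
{
    timeout_deadline_ms = now_ms + window_ms;
}

static void bus_send(const char *sig) { printf("bus <- %s\n", sig); }

/* Called whenever a proprietary payload arrives from the bridge device. */
static void on_proprietary_data(uint32_t now_ms)
{
    /* queue the payload in gateway memory 222 (omitted), then: */
    bus_send("PROPRIETARY_DATA_RECEIVED");  /* S320: confirm receipt */
    timer_restart(now_ms, 5000);            /* S322: avoid premature close */
}

int main(void)
{
    on_proprietary_data(1000);
    printf("new deadline: %u ms\n", (unsigned)timeout_deadline_ms);
    return 0;
}
```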
- the gateway 204 broadcasts a dedicated pipeline closed signal on the data bus to indicate to all bridge devices 210 that a dedicated link to the server 30 is no longer available (e.g., S307/S315).
- the gateway 204 attempts to upload/transmit any queued data to the server 30 once an active link with the server 30 is established.
- the dedicated pipeline closed signal also indicates the resumption of normal data transfer operations to all bridge devices 210. That is, sensors 202 may resume transmission of non-proprietary data (e.g., sensor data) to the gateway 204 according to bus protocol (e.g., CANBUS).
- the gateway 204 determines whether an active link (e.g., an active WiFi and/or cellular connection) exists between the gateway 204 and the server 30 (S324). If an active link is not present, the gateway 204 continues to check for an active link until one is found.
- when the gateway 204 determines that an active link is present, it checks whether there is a queue of proprietary data in the memory 222 for transmission to the server 30 (S326).
- the queued data may be leftover data from a previous (failed) attempt to send data to the server 30. For example, when there is a failure and the memory 222 contains leftover data that were not successfully sent, after restart, the gateway 204 may attempt to retransmit the data.
- if there is no queued proprietary data, the gateway 204 returns to broadcasting a dedicated pipeline open signal on the data bus (S302). However, if queued proprietary data exists, the gateway 204 transmits the queued proprietary data to the server 30 through the active link (S328).
- the gateway 204 then checks whether it has received an acknowledgment of transmission from the server 30. If an acknowledgment is received, the gateway 204 deletes the queued proprietary data from memory 222 to clear memory space for further incoming data (S332). Otherwise, if an acknowledgment is not received, the gateway 204 attempts to retransmit the queued proprietary data by checking again whether an active link is present (S324) and retransmitting the queued data if present (S326 and S328).
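- The upload-and-retry logic of S324-S332 can be sketched as the loop below, where the helpers are hypothetical stubs for the modem and memory operations and queued data is only cleared after the server acknowledges it.

```c
#include <stdbool.h>
#include <stdio.h>

static int  queued_batches = 2;             /* pretend two batches are queued */
static bool link_active(void)          { return true; }  /* stub: S324 */
static void send_queue_to_server(void) { puts("tx batch to server"); } /* S328 */
static bool ack_received(void)         { return true; }  /* stub: ack check */

int main(void)
{
    for (;;) {
        if (!link_active())
            continue;               /* S324: keep polling for an active link */
        if (queued_batches == 0)
            break;                  /* S326: nothing left to send */
        send_queue_to_server();     /* S328 */
        if (ack_received())
            queued_batches--;       /* S332: clear the acknowledged batch */
        /* on a missing ack, loop back to S324 and retransmit */
    }
    puts("queue cleared; resume broadcasting pipeline availability (S302)");
    return 0;
}
```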
- while the dedicated pipeline is established, all other bridge devices 210 (i.e., all bridge devices except 210-i) on the data bus 212 screen their corresponding sensors 202 (i.e., all sensors except 202-i) from any communication between the gateway 204 and the bridge device 210-i.
- as the signals, including the dedicated pipeline request signal, the dedicated pipeline reserved signal, the proprietary data signal, the proprietary data received signal, and the dedicated pipeline release signal, are transmitted on the data bus 212, all other bridge devices 210 screen their corresponding sensors 202 from these signals.
- a bridging device 210 may conceal the transmission of proprietary data on the data bus 212 by sending a screen signal to the corresponding sensor 202.
- the screen signal may be all zeroes, all ones, or any other suitable set of values.
- the bridge device 210 may mask the address of the data traffic by replacing it with the gateway address prior to passing it on to the sensor 202.
- bridge devices may operate as gatekeepers that segregate all other sensors from the data bus 212 and prevent/block them from receiving proprietary communication of the sensor transmitting the proprietary information.
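- A minimal sketch of the two screening options described above (address masking, or an all-zeroes screen signal) is shown below; the frame layout and helper names are assumptions, not a defined wire format.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical CAN-style frame. */
typedef struct {
    uint32_t id;       /* bus address of the original sender */
    uint8_t  data[8];  /* payload */
    uint8_t  len;
} frame_t;

enum screen_mode { SCREEN_MASK_ADDRESS, SCREEN_ALL_ZEROES };

/* Screen a frame before forwarding it to the local sensor: mask the source
 * address with the gateway's address, and optionally replace the payload
 * with a signal of all zeroes so nothing about the traffic can be inferred. */
static void screen_frame(frame_t *f, enum screen_mode mode, uint32_t gateway_id)
{
    f->id = gateway_id;                      /* hide who actually sent this */
    if (mode == SCREEN_ALL_ZEROES) {
        memset(f->data, 0, sizeof f->data);  /* all-zeroes screen signal */
        f->len = sizeof f->data;
    }
}

int main(void)
{
    frame_t f = { .id = 0x1A4, .data = { 0xDE, 0xAD }, .len = 2 };
    screen_frame(&f, SCREEN_ALL_ZEROES, 0x001);
    printf("forwarded id=0x%03X len=%u\n", (unsigned)f.id, f.len);
    return 0;
}
```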
- FIG. 7 illustrates a process S400 of sending proprietary data from a sensor 202 to the gateway 204 via a dedicated pipeline, as performed by the bridge device 210, according to some embodiments of the present invention.
- the bridge device 210 screens/conceals all traffic on the data bus 212 from the sensor 202 until it detects that the dedicated pipeline has been released, for example, as a result of receiving a dedicated pipeline release signal from the gateway 204 (S409). Once the dedicated pipeline is released, the bridge device 210 can resume normal operation and, for example, send non-proprietary data (e.g., sensed data) to the gateway 204 via the data bus 212. The bridge device 210 may also return to checking for an available dedicated pipeline (S404). The non-acceptance (i.e., denial) of the request for the dedicated pipeline may be indicated by not receiving a dedicated pipeline reserved signal within a set or predetermined period of time (e.g., about 0.5 seconds to about 5 seconds) or by observing a dedicated pipeline reserved signal on the data bus 212 that is addressed to a different bridge device.
- the bridge device 210 proceeds to initiate queuing of proprietary data from the sensor 202 for transmission to the gateway 204 (S410). At this point, the bridge device 210 determines whether any proprietary data has been received from the sensor 202 or not (S412). If no proprietary data has been received from the sensor 202 within a first period of time (e.g., about 1 second to about 120 seconds), or if the sensor 202 indicates that all proprietary data has already been sent, the bridge device 210 releases the dedicated pipeline (S414) by, for example, transmitting a dedicated pipeline release signal on the data bus 212.
- the bridge device 210 transmits the queued proprietary data to the gateway 204 via the dedicated pipeline (S416) by, for example, sending the proprietary data on the data bus 212.
- the bridge device 210 confirms whether the transmission of proprietary data was successful or not (S418). In so doing, the bridge device may determine if a proprietary data received signal was received from the gateway 204 or not. If so, the transmission was successful and the bridge device 210 clears its internal queue for proprietary data (S420) and returns to queueing more proprietary data from the sensor (S410).
- otherwise, if the transmission was not successful (e.g., if a proprietary data received signal is not received within a second period of time), the bridge device 210 releases the dedicated pipeline (S414).
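- Condensing the bridge-side flow of FIG. 7, the sketch below requests the pipeline, queues and transmits proprietary data, and releases the pipeline when the sensor has nothing more to send or an acknowledgment is missed; every helper is a hypothetical stub.

```c
#include <stdbool.h>
#include <stdio.h>

static bool pipeline_available(void)       { return true; } /* open signal seen */
static bool pipeline_reserved_for_us(void) { return true; } /* reserved for us */
static int  queue_from_sensor(void)        { static int n = 3; return n--; } /* S410 */
static void send_queued(void)              { puts("tx proprietary data"); }  /* S416 */
static bool gateway_acked(void)            { return true; }                  /* S418 */

int main(void)
{
    if (!pipeline_available() || !pipeline_reserved_for_us())
        return 0;                      /* screen bus traffic and retry later */

    while (queue_from_sensor() > 0) {  /* S410/S412: more proprietary data? */
        send_queued();                 /* S416 */
        if (!gateway_acked())          /* S418 */
            break;                     /* failure: fall through to release */
        /* S420: queue cleared; loop back for more sensor data */
    }
    puts("tx DEDICATED_PIPELINE_RELEASE");  /* S414 */
    return 0;
}
```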
- the period of time for waiting to receive a dedicated pipeline reserved signal, and the first and second periods of time may be configurable by a system administrator.
- some embodiments of the present invention allow sensors in a network to collect and send sensitive proprietary information over a dedicated pipeline, while the bridge devices screen the other sensors on the network from that information.
- the smart trailer and/or any other relevant devices or components according to embodiments of the present invention described herein may be implemented utilizing any suitable hardware, firmware (e.g., an application-specific integrated circuit), software, or a suitable combination of software, firmware, and hardware.
- the various components of the smart trailer may be formed on one integrated circuit (IC) chip or on separate IC chips.
- the various components of the smart trailer may be implemented on a flexible printed circuit film, a tape carrier package (TCP), a printed circuit board (PCB), or formed on a same substrate.
- the various components of the smart trailer may be a process or thread, running on one or more processors, in one or more computing devices, executing computer program instructions and interacting with other system components for performing the various functionalities described herein.
- the computer program instructions are stored in a memory which may be implemented in a computing device using a standard memory device, such as, for example, a random access memory (RAM).
- the computer program instructions may also be stored in other non-transitory computer readable media such as, for example, a CD- ROM, flash drive, or the like.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Arrangements For Transmission Of Measured Signals (AREA)
- Computer And Data Communications (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862635970P | 2018-02-27 | 2018-02-27 | |
PCT/US2019/019870 WO2019169013A1 (en) | 2018-02-27 | 2019-02-27 | Traffic management of proprietary data in a network |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3759952A1 (en) | 2021-01-06 |
Family
ID=65818601
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
- EP19712332.6A (Withdrawn) | Traffic management of proprietary data in a network | 2018-02-27 | 2019-02-27 |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP3759952A1 (en) |
CN (1) | CN111937417A (en) |
CA (1) | CA3092491A1 (en) |
MX (1) | MX2020008951A (en) |
WO (1) | WO2019169013A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111010682A (en) * | 2020-02-22 | 2020-04-14 | 南京凯奥思数据技术有限公司 | Method for transmitting wireless sensor data through ZigBee and WiFi |
DE102023202541A1 (en) | 2023-03-22 | 2024-09-26 | Zf Friedrichshafen Ag | Control connection between a tractor and a trailer |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100366019C (en) * | 2006-03-07 | 2008-01-30 | 南京澳帝姆科技有限公司 | Communication method between wireless sensor network node and gateway |
CN103676795B (en) * | 2012-09-06 | 2016-12-21 | 苏州联科盛世科技有限公司 | Intelligent monitor system based on technology of wireless sensing network |
WO2014148960A1 (en) * | 2013-03-22 | 2014-09-25 | Telefonaktiebolaget L M Ericsson (Publ) | Communication apparatus, control method thereof, and computer program thereof |
US20140376426A1 (en) * | 2013-06-20 | 2014-12-25 | Gary David Boudreau | Machine type communication aggregator apparatus and method |
US20160135109A1 (en) * | 2014-11-12 | 2016-05-12 | Qualcomm Incorporated | Opportunistic ioe message delivery via wan-triggered forwarding |
2019
- 2019-02-27 MX MX2020008951A patent/MX2020008951A/en unknown
- 2019-02-27 WO PCT/US2019/019870 patent/WO2019169013A1/en unknown
- 2019-02-27 CA CA3092491A patent/CA3092491A1/en not_active Abandoned
- 2019-02-27 EP EP19712332.6A patent/EP3759952A1/en not_active Withdrawn
- 2019-02-27 CN CN201980023091.1A patent/CN111937417A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
WO2019169013A1 (en) | 2019-09-06 |
CN111937417A (en) | 2020-11-13 |
MX2020008951A (en) | 2020-12-07 |
CA3092491A1 (en) | 2019-09-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 17P | Request for examination filed | Effective date: 20200928 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| AX | Request for extension of the european patent | Extension state: BA ME |
| DAV | Request for validation of the european patent (deleted) | |
| DAX | Request for extension of the european patent (deleted) | |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| 17Q | First examination report despatched | Effective date: 20221005 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| 18D | Application deemed to be withdrawn | Effective date: 20230721 |