04.10.2019

Driver behavior impacts traffic safety, fuel/energy consumption, and gas emissions. Driver behavior profiling tries to understand and positively impact driver behavior. Usually, driver behavior profiling tasks involve the automated collection of driving data and the application of computer models to generate a classification that characterizes the driver aggressiveness profile. Different sensors and classification methods have been employed in this task; however, low-cost solutions with high performance are still research targets.

This paper presents an investigation with different Android smartphone sensors, and classification algorithms in order to assess which sensor/method assembly enables classification with higher performance. The results show that specific combinations of sensors and intelligent methods allow classification performance improvement.

1 Introduction Driver behavior strongly impacts traffic safety and causes the vast majority of motor vehicle accidents. In 2010, the total economic cost of motor vehicle crashes in the United States was $242 billion. This figure represents the costs for approximately 33 thousand fatalities, 4 million nonfatal injuries, and 24 million damaged vehicles. Driver behavior adaptations might increase overall safety and lessen vehicle fuel/energy consumption and gas emissions.

In this context, driver behavior profiling tries to better understand and potentially improve driver behavior, leveraging safer and more energy-aware driving. Driver monitoring and analysis, or driver behavior profiling, is the process of automatically collecting driving data (e.g., speed, acceleration, braking, steering, location) and applying a computational model to them in order to generate a safety score for the driver.

Driving data collection may be achieved by several kinds of sensors, from the general-purpose ones in smartphones to dedicated equipment such as monitoring cameras, telematics boxes, and On-Board Diagnostic (OBD) adapters. Modern smartphones provide sensors suitable to collect data for driver profile analysis. Previous work shows that properly preprocessed and handled smartphone sensor data are an interesting alternative to conventional black boxes for the monitoring of driver behavior.

The relevance of driver behavior profiling has grown in the last few years. In the insurance telematics domain, plans such as Usage-Based Insurance (UBI) or Pay-How-You-Drive (PHYD) make car insurance cheaper by rewarding drivers with good driving scores, instead of only considering group-based statistics (e.g., age, gender, marital status) for that end. In the freight management domain, automated, continuous, and real-time driver behavior profiling enables managers to institute campaigns aiming to improve drivers' scores and, as a consequence, decrease accidents, increase resource economy, and extend vehicle lifetime. Furthermore, notifications of unsafe driving events presented to drivers in real time can help prevent accidents. For example, a smartphone app may notify the driver every time she performs an aggressive turn.

Several driver behavior profiling works use smartphone-based sensor fusion to identify aggressive driving events (e.g., aggressive acceleration, aggressive braking) as the basis to calculate the driver score. Another work uses vehicle sensor data to provide driving tips and assess fuel consumption as a function of the driver profile. The machine learning algorithms (MLAs) employed in these papers come down to fuzzy logic or variations of Dynamic Time Warping (DTW). Dynamic Time Warping is an algorithm to find similar patterns in temporal series; it was originally employed in the speech recognition problem. We believe that other MLAs and sensor combinations can be applied to the task of identifying aggressive driving events with promising results.

In this context, to the best of our knowledge, there is no work that quantitatively assesses and compares the performance of combinations of smartphone sensors and MLAs in a real-world experiment. The main contribution of this work is to evaluate the performance of multiple combinations of machine learning algorithms and Android smartphone sensors in the task of detecting aggressive driving events in the context of a real-world experiment. We performed a data-collection phase, driving a car and gathering data from several different sensors while performing different maneuvers.

We present how the machine learning methods can be employed in the task and evaluate the accuracy of sensor/technique combinations, aiming to find the best match of sensor and technique for each class of behavior. The remainder of this paper is organized as follows. In Section 2 we present a comprehensive set of works that are related to our proposal, followed by Section 3, in which we present the concepts of the employed techniques. Section 4 describes the methodology, presenting the data-gathering phase and the details of how we model the proposed machine learning application.

It is followed by results and discussion (Section 5). Finally, we conclude the paper by presenting the conclusions and pointing out potential future work.

2 Related work In this section we describe recent driver behavior profiling work. It is worth noting that several driver behavior profiling solutions are commercially available nowadays, mostly in the insurance telematics and freight management domains. Examples include Aviva Drive, Greenroad (greenroad.com), Ingenie, Snapshot, and SeeingMachines.

However, technical details of these solutions are not publicly available. Nericell, proposed by Mohan et al., is a Windows Mobile smartphone application to monitor road and traffic conditions. It uses the smartphone accelerometer to detect potholes/bumps and braking events. It also employs the microphone to detect honking, and GPS/Global System for Mobile communications (GSM) to obtain vehicle localization and speed. Braking, bumps, and potholes are detected by comparing a set of empirically predefined thresholds to abrupt variations of accelerometer data, or to their mean over a sliding window of N seconds.


No MLA is employed in the detection of such events. Some event detection results in terms of False Positives (FPs) and False Negatives (FNs) include: 4.4% FN and 22.2% FP for braking events; 23% FN and 5% FP for bumps/potholes detection at low speed.

3.1 Artificial neural networks Artificial Neural Networks (ANN) are composed of several computational elements that interact through connections with different weights. Inspired by the human brain, neural networks exhibit features such as the ability to learn complex patterns of data and to generalize learned information. The simplest form of an ANN is the Multi Layer Perceptron (MLP), consisting of three layers: the input layer, the hidden layer, and the output layer. Haykin states that the learning processes of an artificial neural network are determined by how parameter changes occur. Thus, the learning process of an ANN is divided into three parts: (i) the stimulation by extraction of examples from an environment; (ii) the modification of its weights through iterative processes in order to minimize the ANN output error; and (iii) the network responding in a new way as a result of the changes that occurred.
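As a hedged illustration of this three-part learning loop (a minimal sketch using scikit-learn rather than the WEKA setup used later in the paper; the data and parameter values are placeholders):

```python
# Sketch: a single-hidden-layer MLP whose weights are adjusted
# iteratively to minimize the output error on training examples.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# (i) examples extracted from an "environment" (synthetic data here)
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(5,),  # one hidden layer
                    learning_rate_init=0.01,  # learning rate parameter
                    max_iter=1000,            # stop criterion
                    random_state=0)
mlp.fit(X, y)       # (ii) iterative weight updates minimizing error
print(mlp.score(X, y))  # (iii) the network now responds differently
```

The `hidden_layer_sizes`, `learning_rate_init`, and `max_iter` values above are the kinds of parameters the next paragraph refers to.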

Parameter configuration directly impacts the learning process of an ANN. Some examples of parameters are: learning rate, momentum rate, stop criteria, and form of network training.

3.2 Support vector machines Support Vector Machines (SVM) are a supervised learning method used for regression and classification. The algorithm tries to find an optimal hyperplane which separates the d-dimensional training data perfectly into its classes. An optimal hyperplane is the one that maximizes the distance between examples on the margin (border) which separates the different classes. These examples on the margin are the so-called “support vectors”. Since training data are often not linearly separable, SVM maps the data into a high-dimensional feature space through some nonlinear mapping.

In this space, an optimal separating hyperplane is constructed. In order to reduce computational cost, the mapping is performed by kernel functions, which depend only on input space variables.
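As an illustrative sketch of why the kernel mapping matters (scikit-learn code on synthetic data, not the paper's implementation), an RBF kernel can separate data that no linear hyperplane in the input space can:

```python
# Concentric-circle data: not linearly separable in the input space,
# but separable after the implicit RBF-kernel feature mapping.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=400, noise=0.05, factor=0.4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = SVC(kernel="linear").fit(X_tr, y_tr)  # linear hyperplane fails
rbf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)

print("linear kernel accuracy:", linear.score(X_te, y_te))
print("RBF kernel accuracy:", rbf.score(X_te, y_te))
```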

The most used kernel functions are: linear, polynomial, radial basis function (RBF), and sigmoid.

3.3 Random forest Random Forests (RF) are sets of decision trees that vote together in a classification. Each tree is built from a random subset of the data points and randomly selects a subset of the features. The tree is then trained on these data points (only on the selected features), and the remaining “out-of-bag” samples are used to evaluate the tree. Random Forests are known to be effective in preventing overfitting. Proposed by Leo Breiman, its features are: (i) it is easy to implement; (ii) it has good generalization properties; (iii) its algorithm outputs more information than just the class label; (iv) it runs efficiently on large databases; (v) it can handle thousands of input variables without variable deletion; and (vi) it provides estimates of which variables are important in the classification.

3.4 Bayesian networks According to Ben-Gal, Bayesian Networks (BNs) belong to the family of probabilistic graphical models.

These graph structures are used to represent knowledge about an uncertain domain. In particular, each node in the graph represents a random variable, while the edges between the nodes represent probabilistic dependencies among the corresponding random variables. Such conditional dependencies in the graph are often estimated using known statistical and computational methods.

Thus, Bayesian networks combine principles of graph theory, probability theory, and statistics.

4 Methodology Sensor data are translated from the device coordinate system to Earth's, in order to achieve device position independence. The metric we use to evaluate assembly performance for each driving event type is the area under the ROC curve (AUC).
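As a minimal sketch of this metric (toy labels and confidence scores computed with scikit-learn, not data from the experiment):

```python
# AUC measures how well a classifier's scores rank positive windows
# (aggressive events) above negative ones, independent of a threshold.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]  # 1 = window contains the event
y_scores = [0.1, 0.7, 0.8, 0.9, 0.4, 0.2, 0.6, 0.3]  # classifier confidences

# 14 of the 16 (positive, negative) pairs are ranked correctly:
print(roc_auc_score(y_true, y_scores))  # → 0.875
```

An AUC of 1.0 would mean every event window receives a higher score than every non-event window.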

The AUC of a classifier ranges from 0.0 (worst) to 1.0 (best), but no realistic classifier should have an AUC less than 0.5, which is equivalent to random guessing. Hence, the closer an evaluation assembly's AUC is to 1.0, the better it is at detecting a particular driving event type. In the remainder of this section, Subsection 4.1 presents the detailed evaluation assembly of machine learning algorithms and sensors. Subsection 4.2 describes the proposed attribute vector, used as input for the machine learning algorithms. Finally, Subsection 4.3 presents how we performed the data collection in a real-world experiment.

4.1 Evaluation assembly The sensor is the first element of the evaluation assembly. It represents one of the following Android smartphone motion sensors: accelerometer (Acc), linear acceleration (LinAcc), magnetometer (Mag), and gyroscope (Gyr).

The accelerometer measures the acceleration in meters per second squared (m/s²) applied to the device, including the force of gravity. The linear acceleration sensor is similar to the accelerometer, but excludes the force of gravity. The magnetometer measures the strength of the magnetic field applied to the device in microtesla (μT), working much like a compass. The gyroscope measures the rate of rotation around the device's axes in radians per second (rad/s). These sensors provide a 3-dimensional (x, y, and z) temporal series with nanosecond precision in the standard sensor coordinate system (relative to the device).

The second element of the evaluation assembly is the sensor axis(es). The available values for this element are (i) all 3 axes; (ii) the x axis alone; (iii) the y axis alone; and (iv) the z axis alone. For example, the accelerometer originates the following data sets: accelerometer (with data from all three axes), accelerometerx, accelerometery, and accelerometerz. The only exception to this rule is the magnetometer, whose x-axis values are always zero or close to zero after translation to Earth's coordinate system. For that reason, there is no magnetometerx data set.
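Under these rules, the resulting data sets can be enumerated with a short sketch (the identifier names are illustrative, not the paper's):

```python
# Enumerate the sensor × axis data sets used in the evaluation.
sensors_with_all_axes = ["accelerometer", "linear_acceleration", "gyroscope"]
axes = ["all", "x", "y", "z"]

datasets = [f"{s}_{a}" for s in sensors_with_all_axes for a in axes]
# The magnetometer has no x-axis data set (x is ~0 in Earth's coordinates):
datasets += [f"magnetometer_{a}" for a in ["all", "y", "z"]]

print(len(datasets))  # → 15
```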

As we evaluate data from 3 sensors that originate 4 data sets each, and 1 sensor that originates 3 data sets, there is a total of 3 × 4 + 3 = 15 data sets. We separated sensor axes into distinct data sets to observe whether any single axis would emerge as the best to detect a particular driving event type. The MLA is the third element of the evaluation assembly.

As detailed in Section 3, we evaluate the classification performance of the MLP, SVM, RF, and BN MLAs. We used the WEKA (version 3.8.0) implementations of these algorithms in conjunction with the LIBSVM library (version 3.17). We trained and tested these classifiers using 10-fold cross-validation in order to minimize overfitting. The algorithm configuration is the fourth element of the evaluation assembly. We performed a parameter grid search to assess each algorithm with every possible combination of parameter values.

We set most of the parameter values experimentally, and followed published guidelines for SVM. We also used WEKA default values for the parameters not listed in the MLA configurations table.
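A hedged sketch of this grid-search-with-cross-validation procedure, using scikit-learn in place of WEKA and placeholder parameter grids (not the paper's exact configurations):

```python
# Grid search over classifier configurations, scored by AUC with
# 10-fold cross-validation, mirroring the evaluation described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 200],   # illustrative values
                "max_features": [5, 10]},
    scoring="roc_auc",  # assemblies are ranked by AUC
    cv=10,              # 10-fold cross-validation
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```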

Raw sensor data are basically composed of 3-axis values and a nanosecond timestamp indicating the instant the sample was collected. However, we do not send raw sensor data to the classifiers. Instead, we group sensor time series samples into one-second frames to compose a sliding time window, which is later summarized to originate an attribute vector. As time passes, the window slides in 1-frame increments over the temporal series, as depicted in the figure below. We consider f0 as the frame of the current second, f−1 as the frame of the previous second, and so forth down to f−(nf − 1), where nf (the fifth element of the evaluation assembly) is the number of frames that compose the sliding time window. We used the following nf values in this evaluation: 4, 5, 6, 7, and 8. These values were defined experimentally so that the sliding window can accommodate the length of the collected driving events, which range from 2 to 7 seconds depending on how aggressive the event is.
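The framing and sliding-window scheme can be sketched as follows (an assumed data layout; `frames_from_samples` and `sliding_windows` are illustrative helpers, not the paper's code):

```python
# Group timestamped sensor samples into one-second frames, then slide
# a window of nf frames over the series one frame at a time.
import numpy as np

def frames_from_samples(timestamps_ns, values):
    """Group samples into one-second frames keyed by whole seconds."""
    seconds = (np.asarray(timestamps_ns) // 1_000_000_000).astype(int)
    return {s: np.asarray(values)[seconds == s] for s in np.unique(seconds)}

def sliding_windows(frames, nf):
    """Yield windows [f-(nf-1), ..., f-1, f0], sliding one frame each step."""
    keys = sorted(frames)
    for i in range(nf - 1, len(keys)):
        yield [frames[k] for k in keys[i - nf + 1 : i + 1]]

# 10 seconds of fake 50 Hz single-axis data (one sample every 20 ms):
t = np.arange(0, 10_000_000_000, 20_000_000)
v = np.sin(t / 1e9)
windows = list(sliding_windows(frames_from_samples(t, v), nf=4))
print(len(windows))  # 10 one-second frames → 7 windows of 4 frames
```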

Time window composed of nf one-second frames which group raw sensor data samples.

In this work, we assessed the performance of several evaluation assemblies to find the ones that best detect each driving event type. The number of assemblies is the result of all combinations of 15 data sets, 5 different values for nf, 4 configurations of the BN algorithm, 5 of the MLP, 6 of the RF, and 36 of the SVM. This results in a total of 15 × 5 × 4 + 15 × 5 × 5 + 15 × 5 × 6 + 15 × 5 × 36 = 3825 evaluation assemblies.

4.2 Attribute vector An attribute vector is the summarization of the sliding window depicted in the previous figure. One instance of the attribute vector is generated for every time window that contains a driving event.

Correspondingly, if there is no driving event in a particular time window, no attribute vector instance is generated. We create an instance of the attribute vector by calculating the mean (M), median (MD), standard deviation (SD), and increase/decrease tendency (T) over the sensor data samples in the frames composing the time window. The number of attributes in the vector depends on the number of frames in the sliding window (nf): there are nf mean, median, and standard deviation attributes, and nf − 1 tendency attributes. A figure depicts the structure of an attribute vector for a single axis of sensor data. When the data set is composed of more than one axis, the attribute vectors for each axis are simply concatenated, and only the class label attribute of the last vector is preserved, as they all have the same value. In the attribute definitions (Equation 4), i = 0..(nf − 1), SF(fj) is a summarizing function (mean, median, or standard deviation) applied over the samples of the j-th frame, and SF(fj, fk) is a summarizing function applied over the samples from the j-th to the k-th frame (j < k).
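A hedged sketch of this summarization for a single axis (the tendency T is computed here as the sign of consecutive frame-mean differences, an assumption, since the paper's exact definition of T is not reproduced above):

```python
# Summarize a window of nf one-second frames into the attribute vector:
# nf means, nf medians, nf standard deviations, and nf-1 tendencies.
import numpy as np

def attribute_vector(frames):
    """frames: list of nf 1-D sample arrays (oldest first), single axis."""
    means = [float(np.mean(f)) for f in frames]
    medians = [float(np.median(f)) for f in frames]
    stds = [float(np.std(f)) for f in frames]
    # Assumed tendency: +1 rising, -1 falling, 0 flat between frames.
    tendencies = [float(np.sign(means[i + 1] - means[i]))
                  for i in range(len(frames) - 1)]
    return means + medians + stds + tendencies

nf = 4
frames = [np.array([i, i + 1.0]) for i in range(nf)]  # toy rising signal
vec = attribute_vector(frames)
print(len(vec))  # 3*nf + (nf - 1) = 15 attributes for nf = 4
```

For multi-axis data sets, the per-axis vectors would simply be concatenated, as described above.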

4.3 Data collection in a real-world experiment We performed a real-world experiment in order to collect sensor data for driving events. In this experiment, an Android application recorded smartphone sensor data while a driver executed particular driving events. We also recorded the start and end timestamps of the driving events to generate the ground-truth for the experiment.

We performed the experiment in 4 car trips of approximately 13 minutes each on average. The experimental conditions were the following: (i) the vehicle was a 2011 Honda Civic; (ii) the smartphone was a Motorola XT1058 with Android version 5.1; (iii) the smartphone was fixed to the car's windshield by means of a car mount, and was neither moved nor operated while collecting sensor data; (iv) the motion sensor sampling rate varied between 50 and 100 Hz, depending on the sensor; (v) two drivers with more than 15 years of driving experience executed the driving events; and (vi) the weather was sunny and the roads were dry and paved with asphalt. The driving event types we collected in this experiment were based on the events used in previous work. Our purpose was to establish a set of driving events that represents usual real-world events such as braking, acceleration, turning, and lane changes. A table shows the 7 driving event types we used in this work and their numbers of collected samples. A figure shows sensor data for an aggressive left lane change event as it is captured by the four sensors used in this evaluation.

5 Results We executed all combinations of the 4 MLAs and their configurations described above over the 15 data sets described in Section 4.1, using 5 different nf values.

We trained, tested, and assessed every evaluation assembly with 15 different random seeds. Finally, we calculated the mean AUC for these executions, grouped them by driving event type, and ranked the 5 best performing assemblies in the accompanying boxplot.


This figure shows the driving events on the left-hand side and the 5 best evaluation assemblies for each event on the right-hand side, with the best ones at the bottom. The assembly identification text encodes, in this order: (i) the nf value; (ii) the sensor and its axis (if there is no axis indication, then all sensor axes are used); and (iii) the MLA and its configuration identifier.

Top 5 best AUC assemblies grouped by driving event type as the result of 15 MLA train/test executions with different random seeds.

In light of these results, we can draw a few conclusions in the context of the performed experiment. Firstly, MLAs perform better with higher nf values (i.e., bigger sliding window sizes). Of the 35 best performing assemblies, 23 have nf = 8, 6 have nf = 7, 5 have nf = 6, and only 1 has nf = 4. Secondly, the gyroscope and the accelerometer are the most suitable sensors to detect the driving events in this experiment.

On the other hand, the magnetometer alone is not suitable for event detection, as none of the 35 best assemblies use that sensor. Also, using all sensor axes generally performs better than using a single axis; the only exception is the z axis of the gyroscope, which alone best detects aggressive left turns. Thirdly, RF is by far the best performing MLA, with 28 out of the 35 best assemblies.

The second best is MLP, with 7 best results. RF dominates the top 5 performances for nonaggressive events and for aggressive turns, braking, and acceleration. However, MLP is better at aggressive lane changes. BN and SVM were not ranked among the 35 best performing assemblies. Fourthly, MLP configuration #1 was the best performing. In this configuration, the number of neurons in the hidden layer is defined as (# attributes + # classes)/2.


This is also the default WEKA configuration. For RF, configurations #6 (# of iterations = 200; # of attributes to randomly investigate = 15) and #5 (# of iterations = 200; # of attributes to randomly investigate = 10) gave the best results. Finally, we found satisfactory and roughly equivalent performance across the 35 top-ranked evaluation assemblies: the difference between the worst mean AUC (0.980, for the aggressive braking event) and the best one (0.999, for the aggressive right lane change event) is only 0.019, a difference that is not significant in the context of this experiment.