I. ABSTRACT
Driver assistance systems that monitor driver intent, warn drivers of lane departures, or assist in vehicle guidance are all being actively considered. It is therefore important to take a critical look at key aspects of these systems, one of which is lane position tracking. These driver assistance objectives motivate the development of the novel “Video Based Lane Estimation and Tracking” (VioLET) system. The system is designed using steerable filters for robust and accurate lane marking detection. Steerable filters provide an efficient method for detecting circular reflector markings, solid-line markings, and segmented-line markings under varying lighting and road conditions. They help to provide robustness to complex shadowing, lighting changes from overpasses and tunnels, and road surface variations. They are efficient for lane marking extraction because a wide variety of lane markings can be extracted by computing only three separable convolutions. Curvature detection is made more robust by incorporating both visual cues (lane markings and lane texture) and vehicle state information. The experiment design and evaluation of the VioLET system are presented using multiple quantitative metrics over a wide variety of test conditions on a large test path using a unique instrumented vehicle. We also present a justification for our choice of metrics based on our work with human factors applications as well as extensive ground-truthed testing from different times of day, road conditions, weather, and driving scenarios. In designing the VioLET system, we first performed an up-to-date and comprehensive analysis of the current state of the art in lane detection research. In doing so, we present a comparison of a wide variety of methods, pointing out the similarities and differences between methods as well as when and where various methods are most useful.
II. INTRODUCTION
Within the last few years, research into intelligent vehicles has expanded into applications which work with or for the human user. Human factors research is merging with intelligent vehicle technology to create a new generation of driver assistance systems that go beyond automated control systems by attempting to work in harmony with a human operator.
Lane position determination is an important component of these new applications. Systems which monitor the driver’s state, predict driver intent, warn drivers of lane departures, and/or assist in vehicle guidance are all emerging. With such a wide variety of system objectives, it is important that we examine how lane position is detected and measure performance with relevant metrics in a variety of environmental conditions.
There are three major objectives of this paper. The first is to present a framework for comparative discussion and development of lane detection and position estimation algorithms. The second is to present the novel “Video Based Lane Estimation and Tracking” (VioLET) system designed for driver assistance. The third is to present a detailed evaluation of the VioLET system by performing an extensive set of experiments using an instrumented vehicle test bed.
The contributions of this research extend to five areas:
1) The introduction of a fully integrated lane estimation and tracking system with specific applicability to driver assistance objectives. By working closely with human factors groups to determine their needs for lane detection and tracking, we developed a lane tracking system for objectives such as driver intent inferencing and behavioral analysis.
2) The introduction of steerable filters for robust and accurate lane marking extraction. Steerable filters provide an efficient method for detecting circular reflector markings, solid-line markings, and segmented-line markings under varying lighting and road conditions. They help to provide robustness to complex shadowing, lighting changes from overpasses and tunnels, and road surface variations. Steerable filters are efficient for lane marking extraction because by computing only three separable convolutions we can extract a wide variety of lane markings.
3) The incorporation of visual cues (lane markings and lane texture) and vehicle state information to help generate robust estimates of lane curvature. By using the vehicle state information to detect instantaneous road curvature, we can detect curvature in situations where roadway lookahead is limited.
4) The experiment design and evaluation of the VioLET system. This experimentation was performed using multiple quantitative metrics over a wide variety of test conditions on a large test path using a unique instrumented vehicle. We also present a justification for our choice of metrics based on our work with human factors applications as well as extensive ground-truthed testing from different times of day, road conditions, weather, and driving scenarios.
5) The presentation of an up-to-date and comprehensive analysis of the current state-of-the-art in lane detection research. We present a comparison of a wide variety of methods, pointing out the similarities and differences between methods as well as the objectives and environmental conditions for which various methods are most useful.
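The three-convolution claim in contribution 2 can be illustrated with a minimal sketch of a steerable second-derivative-of-Gaussian filter in the style of Freeman and Adelson: the oriented response at any angle θ is a closed-form combination of three separable basis responses (d²/dx², d²/dxdy, d²/dy²). The kernel construction, σ value, and function names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gaussian_kernels(sigma, radius=None):
    """1-D Gaussian and its first/second derivatives, sampled on a grid."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()                                  # normalized Gaussian
    dg = -x / sigma**2 * g                        # g'(x)
    ddg = (x**2 / sigma**4 - 1 / sigma**2) * g    # g''(x)
    return g, dg, ddg

def separable_conv(image, kx, ky):
    """Convolve rows with kx, then columns with ky ('same' size)."""
    rows = np.apply_along_axis(np.convolve, 1, image, kx, mode='same')
    return np.apply_along_axis(np.convolve, 0, rows, ky, mode='same')

def steered_response(image, theta, sigma=2.0):
    """Second-derivative-of-Gaussian response steered to angle theta.

    Only three separable convolutions are needed; the response at any
    orientation is then a pointwise trigonometric combination of them.
    """
    g, dg, ddg = gaussian_kernels(sigma)
    rxx = separable_conv(image, ddg, g)   # d2/dx2 of Gaussian-smoothed image
    rxy = separable_conv(image, dg, dg)   # d2/dxdy
    ryy = separable_conv(image, g, ddg)   # d2/dy2
    c, s = np.cos(theta), np.sin(theta)
    return c * c * rxx + 2 * c * s * rxy + s * s * ryy
```

Steering the filter along a marking's expected orientation gives a strong (negative) ridge response on bright line markings, while circular reflectors respond at all orientations; this is one way the same three convolutions can serve solid, segmented, and reflector markings.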
A. System Objectives
In this paper, we consider three main objectives of lane position detection algorithms. These objectives and their distinguishing characteristics are:
1) Lane Departure Warning Systems: For a lane departure warning system, it is important to accurately predict the trajectory of the vehicle with respect to the lane boundary.
2) Driver Attention Monitoring Systems: For a driver attention monitoring system, it is important to monitor the driver’s attentiveness to the lane-keeping task. Measures such as the smoothness of lane following are important for such monitoring tasks.
3) Automated Vehicle Control Systems: For a vehicle control system, it might be required that the lateral position error at a specific lookahead distance be bounded so that the vehicle is not in danger of colliding with any objects.
B. Environmental Variability
In addition to the system objective for which lane position detection will be used, it is important to evaluate the types of environmental variations that are expected to be encountered.
C. Sensing Modalities
Various sensors have been studied to perform lane position determination. Examples of these include:
· Camera and vision sensors
· Internal vehicle state sensors
· Line sensors
· LASER RADAR sensors
· Global positioning system (GPS) sensors
While LASER RADAR sensors, line sensors, and GPS sensors can perform extremely well in certain situations, vision sensors can be utilized to perform well in a wide variety of situations. LASER RADAR sensors are useful in rural areas for helping to resolve road boundaries, but fail on multi-lane roads without the aid of vision data. Line sensors, while accurate for current lateral position, have no look-ahead and cannot be used for trajectory forecasting, which is needed to compute metrics such as time to lane crossing (TLC). GPS, especially differential GPS, can provide accurate position resolution, but achieving these accuracies requires infrastructure improvements, and such systems rely on map data that may be outdated and inaccurate. Vision sensors can provide accurate position information without the need for external infrastructure or previously collected map data. In situations where vision sensors do not perform well (e.g., extreme weather conditions or off-road conditions), the vision data can be fused with other sensor modalities to provide better estimates. This makes vision sensors a good base on which to build a robust lane position sensing system. For these reasons, this article will focus mainly on vision sensors augmented by vehicle state information obtained from the in-vehicle sensors.
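To make concrete why look-ahead matters for trajectory forecasting, a first-order TLC estimate can be computed from lateral offset and lateral velocity alone. This is a hedged, simplified sketch (constant lateral drift, straight lane); the function name, sign convention, and default half-lane width are illustrative assumptions, not the paper's metric definition.

```python
def time_to_lane_crossing(lateral_offset_m, lateral_velocity_ms,
                          lane_half_width_m=1.8):
    """First-order time-to-lane-crossing (TLC) estimate in seconds.

    Assumes the vehicle drifts toward a boundary at constant lateral
    velocity. Sign convention: positive offset/velocity = leftward.
    Returns float('inf') when the vehicle is not drifting.
    """
    if lateral_velocity_ms == 0.0:
        return float('inf')
    # Distance to the boundary the vehicle is drifting toward.
    if lateral_velocity_ms > 0:
        distance = lane_half_width_m - lateral_offset_m
    else:
        distance = lane_half_width_m + lateral_offset_m
    if distance <= 0:
        return 0.0  # already at or past the boundary
    return distance / abs(lateral_velocity_ms)
```

For example, a centered vehicle drifting left at 0.3 m/s in a lane with a 1.8 m half-width has roughly 6 s before crossing; a line sensor with no look-ahead cannot supply the drift trajectory this estimate needs.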
III. COMPARISON OF LANE POSITION DETECTION AND TRACKING SYSTEMS
In this section we examine the current state of the art in lane position detection and tracking and provide a critical comparison between algorithms. Broad surveys of intelligent vehicles have examined many of the lane position sensing algorithms. While these papers are useful for broad examinations of vision research for intelligent vehicles, their breadth limits the detail they can provide on lane position sensing. It is our intent to provide a more in-depth survey of the current methods for lane position sensing. To cover the large expanse of research that has taken place over the last 15 to 20 years, we group the algorithms discussed here into categories related to their contributions. Fig. 1 shows a generalized flowchart for lane position detection systems, combining multiple modalities, an iterative detection/tracking loop, and road and vehicle models.
IV. THE VIDEO BASED LANE ESTIMATION AND TRACKING (VIOLET) SYSTEM FOR DRIVER ASSISTANCE
Breaking down the design into the sections illustrated in Fig. 1 helps to create a lane position detection and tracking system focused on one or more of the system objectives described in section II-A and capable of handling a variety of the environmental conditions explored in section II-B. By examining the system one piece at a time and understanding how each choice might affect overall system performance, we can optimize the system for our objective of driver assistance.
Fig. 2. System flow for VioLET, a driver-assistance-focused lane position estimation and tracking system.
The primary objective of the VioLET system is driver assistance. This is a rather broad objective, so some clarification is necessary. It is our intention for the system to provide accurate lateral position over time for the purposes of lane departure warning and driver intent inferencing. The intended environment for the lateral position detection is daytime and nighttime highway driving under a variety of different roadway environments. These environments include shadowing and lighting changes, road surface texture changes, and road markings consisting of circular reflectors, segmented lines, and solid lines. The VioLET system follows a flow similar to the generic system flow described in section III. The system-specific flowchart is diagrammed in greater detail in Fig. 2. In this section we describe each of the system modules and the motivation behind their development.
A. Vehicle and Road Modeling
B. Road Feature Extraction
C. Road Curvature Estimation
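As a hedged illustration of the vehicle-state cue for curvature mentioned in contribution 3: under a planar, no-slip assumption, the instantaneous path curvature follows from yaw rate and speed as κ = ψ̇ / v, which remains available even when roadway lookahead is limited. The function below is an illustrative sketch, not the paper's estimator; the name and the minimum-speed guard are assumptions.

```python
def curvature_from_vehicle_state(yaw_rate_rad_s, speed_ms, min_speed_ms=1.0):
    """Instantaneous path curvature (1/m) from vehicle state.

    Under a planar, no-slip assumption, curvature kappa = yaw_rate / speed.
    Returns 0.0 below min_speed_ms, where the ratio is numerically unstable.
    """
    if speed_ms < min_speed_ms:
        return 0.0
    return yaw_rate_rad_s / speed_ms
```

For example, a yaw rate of 0.04 rad/s at 20 m/s gives κ = 0.002 m⁻¹, i.e., a 500 m radius curve; fusing such a vehicle-state estimate with visual cues is what makes the curvature estimate robust when the visible road ahead is short.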
D. Post-Processing and Outlier Removal
E. Position Tracking
V. EXPERIMENTS AND PERFORMANCE EVALUATION
VI. CONCLUSION
VII. REFERENCES
IEEE Transactions on Intelligent Transportation Systems, December 2004, revised July 2005
IEEE Intelligent Vehicles Symposium
IEEE International Workshop on Machine Vision for Intelligent Vehicles, in conjunction with IEEE CVPR, June 21, 2005