Received 30 March 2016; accepted 16 May 2016; published 19 May 2016
The annual number of manufactured cars has grown considerably in recent years, as shown by data from the Automotive Manufacturers Association. According to studies on vehicle production, Argentina ranks 38th in vehicles per capita (268 motor vehicles per 1000 inhabitants).
This increase is reflected in the number of accidents, a fatal risk driven by the strong growth in traffic. Argentina has the highest rate of deaths from traffic accidents in South America: around 5000 people die each year, a rate of 12.6 deaths per 100,000 inhabitants. The WHO (World Health Organization) recognized a decrease in traffic deaths over the last three years, but Argentina's figures remain high. Road accidents cause the death of 21 people every day. In addition to deaths and injuries, material losses are estimated at 3000 million dollars a year.
The number of traffic accidents remained roughly constant between 1992 and 2015 (around 7500), a high value compared with other countries, relative to Argentina's population and number of circulating vehicles.
To address this problem, it is useful to analyze the conditions and main causes of accidents in the country. The accident rate in rural areas is 48%, while in urban areas it is 52%. By road type, the roads with better-defined lanes, national roads (51%), provincial roads (25%), highways (9%) and avenues (7%), are precisely where the statistics are worst. Regarding lighting conditions, most accidents occur in daylight (66.1%) and under the best atmospheric conditions (good weather: 85.8%, rain: 10.1%, fog: 0.9%).
Regarding human errors, lane invasion is the main cause of traffic accidents (40%), followed by distractions (16%) and incorrect speed (10%).
Several studies show that the highest accident rates are observed in urban areas, on national roads, in broad daylight and in good weather. Moreover, the main cause of accidents is lane invasion on national roads, highways and avenues, where the lanes are better defined.
One possible solution to the problem is the incorporation of driver assistance systems. These systems enhance driving quality and provide information of interest related to the vehicle, the road and the environment; in addition, they passively alert the driver to risky situations, decreasing the likelihood of accidents, and can even act actively on vehicle control in high-risk scenarios. Some dangerous situations are caused by distractions, the most common being the use of cell phones or electronic devices, GPS, drinks, changing the radio station, posters and other abnormal events. These are compounded by tiredness, microsleeps, and the long, monotonous routes common in Argentina. Using this type of system gives the driver more time to react, avoiding accidents in time.
Driver assistance systems, or I2DASW (Interactive Intelligent Driver-Assistance and Safety Warning) systems, provide functions such as maintaining speed and distance from the vehicle ahead and avoiding imminent risk of collision. They are also able to assist the driver in changing or staying in a lane, or in parking maneuvers; they identify speed limits, and can even see in the dark well in advance (night vision assistant). Their functions include detecting lanes, static or moving obstacles and traffic signals, and improving visibility, among others.
I2DASW systems combine image sensors, LIDAR (Laser Imaging Detection and Ranging) and radar, and can be used in automatic speed and distance control systems such as ACC (Adaptive Cruise Control) and lane departure warning. These systems integrate the study of driver behavior and vehicle dynamics into a framework for assessing threat situations. Algorithms have also been proposed for obstacle detection and tracking using multiple sensors, including radar, LIDAR and cameras.
The main functions of I2DASW systems are:
Provide real-time information about the vehicle, the driver and the traffic environment for better and safer driving.
Safety and assistance. The system warns the driver about possible risk situations depending on the current vehicle position, direction, speed and traffic situation; it can also take vehicle control in dangerous situations. The safety function alerts the driver in case of a possible collision or during a lane change.
The system can protect drivers and passengers from the impact between human and vehicle bodies, for example, using smart airbag systems.
Several commercial systems exist, but they are expensive and cannot be configured or modified according to regional features.
This work presents a solution for improving driving quality through the implementation of an I2DASW framework. The framework determines risk situations based on the current conditions of the vehicle and the road. A risk situation is determined by a set of traffic rules that are analyzed individually. When a dangerous situation is detected, the framework analyzes its degree of severity and alerts the driver. The system alerts the driver passively through the user interface, with sound and visual alarms.
The framework is scalable, capable of incorporating information from new sensors and actuators, as well as new rules. It also allows the system to be modified with minimal impact, and to interoperate with other systems (e.g., a GIS navigation system). Experimentally, the framework was tested using a single camera, which provides information for lane and obstacle detection.
Section 2 presents the architectural design of the framework, Section 3 the results, and finally the conclusions and future work are presented.
The framework was developed to interact with a large number of sensors, such as camera image sensors, infrared images, or vehicle status information such as speed. These sensors provide different types of data to the framework, and the design must be capable of processing new kinds of information in the future.
The initial set of requirements that must be met for the framework is:
Increase driving quality.
Provide information of interest related to the vehicle, the road and the environment.
Passively alert the driver to risky situations.
Provide the distance from the vehicle ahead.
Assist the driver when changing lanes.
Detect lanes and mobile or static obstacles, and determine whether they are in the current driving lane.
Determine risk situations from a group of sensors.
Analyze the severity of a risky situation.
Configurable display of information.
The identified architectural components are:
Static information (database).
The next subsections present a short description of the detailed design of each layer.
Figure 1. Diagram of layers of the framework software architecture.
2.1. Data Capture Layer
This layer is the entry point of information into the framework and is associated with the physical sensors. Whenever a sensor obtains new raw data, the data is sent to the next layer (Data Recognition Layer), specifically to the Collector class. This class contains the set of available sensors and subscribes to their events. The data is received through a single method, as a parameter of the Data class type. The information from the sensors to the collector is sent asynchronously, through an event system, allowing information from different sensors to be received simultaneously and avoiding bottlenecks when processing different kinds of information.
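As an illustration, the sensor-to-collector event flow described above could be sketched as follows. The class and method names below are assumptions for illustration only; the paper does not publish its code.

```python
# Sketch of the Data Capture layer: sensors publish raw readings as events,
# and the Collector receives all of them through a single method.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Data:
    """Base class for all raw sensor readings (hypothetical)."""
    source: str


@dataclass
class ImageData(Data):
    """A camera frame; 'pixels' stands in for the raw image buffer."""
    pixels: bytes = b""


class Sensor:
    """A sensor publishes raw readings to its subscribers (event style)."""

    def __init__(self, name: str):
        self.name = name
        self._subscribers: List[Callable[[Data], None]] = []

    def subscribe(self, callback: Callable[[Data], None]) -> None:
        self._subscribers.append(callback)

    def emit(self, data: Data) -> None:
        # Notify every subscriber; in the real framework this would be
        # asynchronous so that sensors do not block one another.
        for callback in self._subscribers:
            callback(data)


class Collector:
    """Holds the set of available sensors and receives all data
    through a single method, regardless of the concrete Data subtype."""

    def __init__(self, sensors: List[Sensor]):
        self.received: List[Data] = []
        for sensor in sensors:
            sensor.subscribe(self.on_data)  # subscribe to sensor events

    def on_data(self, data: Data) -> None:
        # In the real framework this would forward to the recognition layer.
        self.received.append(data)


camera = Sensor("front_camera")
collector = Collector([camera])
camera.emit(ImageData(source="front_camera"))
print(len(collector.received))  # 1
```

A real implementation would dispatch `emit` on a queue or thread pool so that a slow consumer cannot stall the sensors; the sketch keeps the call synchronous for brevity.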
2.2. Data Recognition Layer
The data collector can receive data in several formats (image, GPS information, accelerometer, on-board computer data), represented by the Data class and its subclasses. Each of these classes holds specific information according to the type of data to be treated. The framework allows new data types to be added as new child classes of the Data class. A Visitor design pattern is implemented, combined with Observer-Observable, to determine which concrete object was received by the collector. The received data is then transferred to the Data Processing Layer.
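A minimal sketch of the Visitor dispatch described above, assuming hypothetical ImageData and GpsData subclasses (the Observer-Observable wiring is omitted for brevity, and all names are illustrative):

```python
# Visitor pattern: each Data subclass "accepts" a visitor, which lets the
# recognition layer act on the concrete type without type checks.
class DataVisitor:
    def visit_image(self, data: "ImageData") -> str:
        raise NotImplementedError

    def visit_gps(self, data: "GpsData") -> str:
        raise NotImplementedError


class Data:
    def accept(self, visitor: DataVisitor) -> str:
        raise NotImplementedError


class ImageData(Data):
    def accept(self, visitor: DataVisitor) -> str:
        return visitor.visit_image(self)


class GpsData(Data):
    def accept(self, visitor: DataVisitor) -> str:
        return visitor.visit_gps(self)


class RecognizerDispatcher(DataVisitor):
    """Routes each concrete Data subtype to the matching recognizer."""

    def visit_image(self, data: ImageData) -> str:
        return "lane/obstacle recognizer"

    def visit_gps(self, data: GpsData) -> str:
        return "position recognizer"


dispatcher = RecognizerDispatcher()
print(ImageData().accept(dispatcher))  # lane/obstacle recognizer
print(GpsData().accept(dispatcher))    # position recognizer
```

Adding a new data type then only requires a new Data subclass and one new `visit_*` method, which matches the extensibility requirement stated above.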
2.3. Data Processing Layer
This layer also contains the User Profile, which stores how the user is informed about possible risks: sound alert, visual alert or vehicle control. The user can also enable or disable alerts.
The recognized objects, the traffic rules and the user profile interact with the decision engine to select the best decision, active or passive, to apply. The decision engine subscribes to all recognizer events. A recognizer can provide different types of information. When new relevant information is recognized, the list of associated conditions is obtained. The obtained events are processed according to a set of traffic rules that assess risk. The traffic rules can be modified by incorporating or deleting rules, and are implemented in a local database.
This database contains three tables: Condition, Action and Action_Argument, used by the decision engine to determine whether driving is safe or not, and which action to take. The conditions associated with a single object can be grouped to validate whether all of them, some of them, or none are met, in order to determine the action. For example, when a road is detected, different conditions are evaluated, each with its own set of corresponding actions:
(Road.type == straight) AND (CentralLine.color == yellow) AND
(CentralLine.type == double) AND (obstacle does not exist)
Several actions can be grouped and assigned an order, so that they are executed sequentially when a condition is met. For the condition described above, two actions must be executed in the following order:
1. Show message.
2. Trigger sound alarm.
The Action_Argument table allows each action to be customized by adding parameters. Following the example:
1. Display a warning message titled “Attention” with the text “do not overtake”. This message should be displayed in color “#FFFF00” (yellow), and the lane where the vehicle is circulating should be painted “#59FFFF00” (transparent yellow).
2. Play the sound corresponding to the value “do not overtake”.
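The rule evaluation described by these tables could be sketched as follows. The in-memory lists below stand in for the Condition, Action and Action_Argument tables, and all field names are illustrative assumptions rather than the framework's actual schema:

```python
# Sketch of the decision engine evaluating one grouped rule: if every
# condition in the group holds, its actions fire in their stored order.

# Condition group: all predicates must hold for the actions to fire
# (the "AND" grouping from the example above).
conditions = [
    lambda ctx: ctx["road_type"] == "straight",
    lambda ctx: ctx["central_line_color"] == "yellow",
    lambda ctx: ctx["central_line_type"] == "double",
    lambda ctx: not ctx["obstacle"],
]

# Ordered actions, each carrying its Action_Argument parameters.
actions = [
    {"order": 1, "action": "show_message",
     "args": {"title": "Attention", "text": "do not overtake",
              "color": "#FFFF00", "lane_color": "#59FFFF00"}},
    {"order": 2, "action": "play_sound",
     "args": {"sound": "do not overtake"}},
]


def decide(ctx):
    """Return the ordered action list if every condition is met,
    otherwise no actions (driving is considered safe for this rule)."""
    if all(cond(ctx) for cond in conditions):
        return sorted(actions, key=lambda a: a["order"])
    return []


context = {"road_type": "straight", "central_line_color": "yellow",
           "central_line_type": "double", "obstacle": False}
for step in decide(context):
    print(step["action"])  # show_message, then play_sound
```

Storing the groups in database rows rather than code is what lets rules be added or removed without recompiling, as the text notes next.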
Implementing the traffic rules in a database allows customizing both the rules and how the system responds. In this way, the framework can be modified with minimal impact, meeting the requirements.
2.4. Presentation Layer
The Presentation Layer is responsible for visualization on the User Interface, or for taking vehicle control. This layer presents the context conditions to the user, either through the screen, through sound alerts, or through the vehicle itself when the framework decides to act directly, for example by applying the brake.
The User Interface gives access to the framework options: selecting notifications and on-screen displays, enabling the I2DASW system, and customizing the system.
The main screen was designed to display as much information as possible without overwhelming the user, providing the ability to hide panels and to customize the traffic rules, messages, colors and alerts (Figure 2).
3. Results
The system performance was tested on a set of sample images obtained from an image bank of the US Robotics Institute. In addition, a number of pictures and videos were taken on Argentine routes, mainly in Mendoza and on the highway between Azul and Olavarría (National Route 226), Buenos Aires, Argentina.
The regional images were taken from inside a vehicle, using a camera located near the rearview mirror. Some images show adverse situations such as frontal sun, snow and poorly defined lanes.
The user interface indicates the severity of risk by coloring the area of the driving lane, as follows:
Green: no dangerous situation is detected (Figure 3(a)).
Yellow: it is not possible to change lanes (Figure 3(b)).
Red: possible risk of an obstacle on the road (Figure 3(c)).
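This severity mapping can be sketched as a simple decision function; the enum and predicate names below are assumptions, not the framework's actual API:

```python
# Severity-to-color mapping for the lane overlay, mirroring the three
# cases listed above (green / yellow / red).
from enum import Enum


class Severity(Enum):
    SAFE = "green"      # no dangerous situation detected
    CAUTION = "yellow"  # lane change not possible
    DANGER = "red"      # possible obstacle on the road


def lane_color(obstacle_in_lane: bool, lane_change_possible: bool) -> Severity:
    """Pick the lane overlay color from the recognized conditions,
    most severe case first."""
    if obstacle_in_lane:
        return Severity.DANGER
    if not lane_change_possible:
        return Severity.CAUTION
    return Severity.SAFE


print(lane_color(False, True).value)   # green
print(lane_color(False, False).value)  # yellow
print(lane_color(True, False).value)   # red
```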
Lane change detection was assessed on 12,212 pre-catalogued images, obtaining a detection rate of 81.25% for lane changes without false positives. This value was obtained by comparing the system output against the previously assigned ground-truth value. It is also the strictest measure, since it takes into account not only detected and undetected lanes, but also incorrect changes, i.e., false positives.
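One plausible reading of this metric, a detection rate that counts false positives as errors alongside missed detections, can be sketched as follows. The counts in the example are made up for illustration; only the 81.25% figure comes from the evaluation:

```python
# Sketch of a strict detection-rate metric: both missed lane changes
# (false negatives) and spurious detections (false positives) count
# against the score.

def detection_rate(true_positives: int, false_negatives: int,
                   false_positives: int) -> float:
    """Fraction of correct detections over all detection outcomes."""
    total = true_positives + false_negatives + false_positives
    return true_positives / total if total else 0.0


# Example with made-up counts (13 correct, 2 missed, 1 spurious):
rate = detection_rate(true_positives=13, false_negatives=2, false_positives=1)
print(f"{rate:.4f}")  # 0.8125
```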
Figure 2. Classic screen visualization of the system. Several indicators can be seen: left line of the lane (blue), right line of the lane (red), midlines (yellow) and obstacle indicators (small blue rectangles).
Figure 3. Framework evaluation. (a) Normal driving, no risk in any lane; (b) Caution alert, due to a vehicle in the opposite lane; (c) Danger, while overtaking another vehicle.
4. Conclusions
This work focuses on assisting drivers in order to reduce the number of traffic accidents, in a way adaptable to regional conditions. Although some commercial systems exist, they are not viable for regional situations, such as the presence of animals on the roads or poorly painted lanes.
The designed architecture allows incorporating information from new sensors and new recognizers, and modifying the traffic rules, user configuration or environment actions (alarms, displays, vehicle control). This feature is the most relevant contribution, making the system adaptable to new requirements. The proposed framework includes the quality attributes of usability, flexibility and modifiability. In addition, the performance attribute is met by transferring atomic messages to the upper layers. The architecture design supports integration with existing systems in the vehicle, such as navigation systems and on-board computers, as well as the incorporation of new systems such as rear camera information, traffic signal detection and blind spot detection.
The framework was evaluated by incorporating lane and obstacle detection. The results show good reliability of the entire framework.
5. Future Work
As future work, we propose to instantiate the framework with new capabilities, including new recognizers based on image processing (traffic signal identification, pedestrian detection, bump detection, among others) as well as recognizers based on vehicle sensors (speed, accelerator, brakes, etc.). Integration with a GPS/GIS system is another desired capability, as is the incorporation of active actions on the vehicle.
In addition, a main interest of this work is the implementation of the entire system on an embedded platform, particularly a Xilinx Zynq.