Received 23 April 2016; accepted 15 May 2016; published 30 June 2016
VoIP services can be carried over any IP-routed network, such as the Internet, intranets and local networks, in which digitized voice packets are forwarded to the destination. Several analyses have compared the performance of VoIP over plain IP networks with VoIP over Multiprotocol Label Switching (MPLS) networks, and they show that the overall performance of voice transmission improves when VoIP runs over MPLS.
Internet applications have specific service requirements, and the network must be able to provide the required QoS guarantees. Most often, the QoS requirements of real-time applications concern bandwidth, delay and packet loss. Multiprotocol Label Switching is a widely adopted standard in the Internet and is used as a framework through which existing and future QoS approaches can be implemented. MPLS features a simple and effective packet forwarding mechanism that overlays the virtual-path capability of a connection-oriented network on connectionless IP networks to carry voice, data and video traffic with different levels of service performance. However, MPLS traffic engineering alone is not a complete QoS solution: VoIP traffic and its characteristics must be identified in order to compute label switched paths that satisfy multiple traffic constraints and guarantee the required QoS.
In this work, a new architecture for VoIP applications in Multiprotocol Label Switching networks is proposed that classifies Internet traffic flows based on their flow arrival rate, packet loss rate and delay. The flow arrival rate is estimated using the exponential double averaging method, and the packet loss rate is estimated using an active measurement probing technique. Based on this classification, traffic flows are routed over multiple parallel paths, which improves available-bandwidth utilization and avoids congestion.
2. Existing QoS Mechanisms
Several approaches for VoIP applications are reviewed to analyze the QoS mechanisms available in IP and MPLS networks. The load balancing and link utilization methodologies proposed by various researchers in the field of MPLS networks are studied to evaluate the performance of the proposed system.
2.1. VoIP Detection
Fauzia Idrees et al. have proposed a technique for VoIP detection by analyzing currently deployed voice applications such as Skype, Google Talk, Yahoo Voice and MSN Voice. Packets from these VoIP services are compared with non-VoIP traffic such as e-mail, file downloading, file sharing, instant messaging, games and video. They investigated packet inter-arrival time, average packet size, rate of packet exchange and packet exchange sequences to identify VoIP traffic.
From these observations, they conclude that packet size and the average number of packets received per second distinguish VoIP applications from non-VoIP applications. The number of voice packets received per second is between 20 and 40, whereas for other applications it is between 2 and 10. The average VoIP packet size is between 100 and 250 bytes, whereas for other applications it is more than 400 bytes. Hence, the VoIP traffic detection algorithm first filters User Datagram Protocol (UDP) packets and then marks those whose packet count lies between 20 and 40 per second as VoIP traffic.
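A minimal sketch of this detection heuristic, using the thresholds reported above (the function and its parameters are illustrative, not part of the cited work):

```python
def looks_like_voip(protocol, pkts_per_sec, avg_pkt_size):
    """Heuristic VoIP detector based on the reported observations:
    VoIP streams show 20-40 packets/s and 100-250 byte packets,
    versus 2-10 packets/s and >400 bytes for non-VoIP applications."""
    if protocol != "UDP":                  # VoIP media runs over UDP
        return False
    if not (20 <= pkts_per_sec <= 40):     # packet rate band for voice
        return False
    return 100 <= avg_pkt_size <= 250      # packet size band for voice

# A Skype-like stream: UDP, ~25 packets/s, ~160-byte packets
print(looks_like_voip("UDP", 25, 160))   # True
print(looks_like_voip("TCP", 25, 450))   # False (bulk transfer)
```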
A multi-service differentiation model is proposed by defining three types of paths to be traversed by traffic flows. Flows are classified and assigned a path based on threshold values. A standard path is the shortest route available between a source-destination pair and is allocated to priority flows. An alternative path is generally longer than the standard path and is allocated to non-priority flows. A null path is a route traversed by demoted non-priority data flows, on which packets may be dropped.
The resources at each node are monitored by assessing the Differentiated Services (DiffServ) queue lengths. When the queue length reaches the deviation threshold, it acts as a pre-congestion alarm: all previously assigned paths are maintained, and new flows are assigned to alternative paths. The critical threshold triggers the packet dropping process and yields a new set of null paths that route demoted non-priority data flows. The standard threshold indicates that a steady traffic load has been reached, and all paths are again available to new incoming traffic flows. Their results show that this multi-service architecture can be adopted by modifying or extending some data plane and control plane functionalities in a straightforward way.
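The three-threshold logic above can be sketched as a simple policy function; the numeric threshold values are assumptions expressed as fractions of the DiffServ queue capacity:

```python
# Illustrative thresholds (fractions of queue capacity) - assumptions,
# the cited work does not publish concrete values here.
STANDARD_TH  = 0.30   # steady load: all paths open to new flows
DEVIATION_TH = 0.60   # pre-congestion alarm: new flows -> alternative paths
CRITICAL_TH  = 0.90   # start dropping: demote non-priority flows to null paths

def path_policy(queue_occupancy):
    """Map DiffServ queue occupancy (0.0-1.0) to the path-assignment policy."""
    if queue_occupancy >= CRITICAL_TH:
        return "null"         # drop packets of demoted non-priority flows
    if queue_occupancy >= DEVIATION_TH:
        return "alternative"  # keep existing paths, divert new flows
    return "standard"         # shortest paths available to all flows

print(path_policy(0.2))    # standard
print(path_policy(0.7))    # alternative
print(path_policy(0.95))   # null
```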
2.2. QoS Mechanisms
Jose M. F. Craveirinha et al. have proposed a traffic splitting approach in MPLS networks. With this approach, a given node-to-node traffic flow is divided over two disjoint paths based on the load balancing cost and the available link bandwidth. The load balancing cost is measured as the bandwidth occupied on the paths. The least utilized label switched paths (LSPs) are found using a path ranking algorithm, with the minimum number of links in the path and the load balancing cost as criteria. If there are alternative optimal paths, a dominance test is used to select the appropriate path based on threshold values. These thresholds define regions in the load balancing cost function space with different priority requirements, which enable the ordering of the paths.
A QoS mechanism for flow-based routers is proposed in which two separate queuing mechanisms, real-time clock fair queuing and adaptive flow random early drop, handle the incoming traffic. The Adaptive Flow Random Early Drop (AFRED) algorithm extends random early drop queuing with a dropping probability based on the Dual Metrics Fair Queuing (DMFQ) algorithm. Guaranteed traffic is carried with AFRED queuing by identifying each flow separately. To identify a flow, a hash value is generated for each packet that indexes the flow state information for that packet's flow; this state is associated dynamically when a packet arrives in the system. Hash functions such as XOR and CRC32 are used to generate the hash values, but a well-designed hash function is necessary to avoid collisions.
Bosco A. et al. propose a bandwidth engineering (BE) approach that improves the MPLS control plane functionality and utilizes the management plane functionality of Akyildiz I. F. et al. Bandwidth engineering is a Traffic Engineering (TE) mechanism proposed to assure QoS for high priority paths: it can guarantee bandwidth for a high priority flow even under heavy load. BE operates in both off-line and on-line modes. Off-line routing is implemented with a global path-provisioning module that computes an LSP for a given traffic flow. For a high priority traffic flow, the path is estimated in off-line mode and cannot be changed. When the load increases, the bandwidth is increased without delay by setting up a local admission control element. To honour the bandwidth attribute of a high priority flow, the mechanism checks whether bandwidth is available from other LSPs, or even from low priority flows, and allocates it to the high priority flow. For a low priority flow, the LSP is also estimated in off-line mode, but it can be changed dynamically when the network load changes; a low priority flow may even be suspended.
Elwalid A. et al. have proposed a Multipath Adaptive Traffic Engineering (MATE) mechanism for MPLS networks. Its main goal is to avoid network congestion by adaptively balancing the load among multiple paths. MATE provides traffic filtering and distribution functions: it filters the traffic and distributes it equally among N bins using a round robin algorithm. A bin holds the minimum amount of traffic that can be shifted between LSPs, with each bin mapped to one LSP. To filter the incoming traffic, MATE uses a hash function based on a Cyclic Redundancy Check (CRC), applied to the source and destination address fields of the IP header. The traffic is then distributed by applying a modulo-N operation over the hash space, and traffic from the N bins is mapped to the corresponding LSPs. The load balancing algorithm tries to equalize the congestion measures among these LSPs. MATE's load balancing concentrates on flows with short-term traffic rate fluctuations, but it is applicable only to flows that do not require bandwidth reservation, i.e. best effort traffic.
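The CRC-hash-plus-modulo distribution step can be sketched as follows; the bin count and function name are illustrative, and `zlib.crc32` stands in for MATE's CRC:

```python
import socket
import zlib

N_BINS = 8  # number of bins, each mapped to one LSP (illustrative)

def bin_for_packet(src_ip, dst_ip):
    """Assign a packet to one of N bins, MATE-style.

    A CRC hash over the source/destination IP addresses, followed by a
    modulo-N operation over the hash space, keeps all packets of a flow
    on the same LSP while spreading different flows across LSPs."""
    key = socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
    return zlib.crc32(key) % N_BINS

# All packets of one flow land in the same bin (hence the same LSP)
b1 = bin_for_packet("10.0.0.1", "10.0.1.1")
b2 = bin_for_packet("10.0.0.1", "10.0.1.1")
assert b1 == b2
print(0 <= b1 < N_BINS)   # True
```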
Weiqiang Sun et al. have analysed the dynamic provisioning performance of label switched paths in Generalized Multiprotocol Label Switching (GMPLS) networks. To improve QoS, it is necessary to measure and characterize the gap between the network's provisioning performance and application needs. GMPLS is a set of control protocols defined by the Internet Engineering Task Force (IETF) to control various network devices such as routers and Time-Division Multiplexing (TDM) switches. GMPLS provides connection establishment on demand and has the automated components necessary for signaling and routing functions. The processing capability of the control plane is determined under different traffic loads, since the control plane is responsible for optimal LSP estimation and setup. Performance is measured using three metrics: unidirectional LSP setup delay, bidirectional LSP setup delay and LSP graceful release delay. The experimental results show that LSP setup delay exhibits high variance when the traffic load is high and the number of hops along the LSP is large. More powerful hardware platforms may increase the control plane's LSP provisioning performance, but may not always be available.
3. MPLS Based VoIP Packet Dispersion (MPVoIP) Architecture
In order to improve QoS for VoIP applications, an effective load balancing technique with optimal link utilization (MPVoIP) is proposed that forwards VoIP packets over specifically selected, QoS-guaranteed multiple paths, in contrast to the traditional single path approach.
The proposed load balancing technique mainly deals with flow classification and implements a load adapter. The flow classification algorithm resides in the data plane and classifies Internet traffic flows as VoIP flows and normal data flows. The FEC formulation component assigns a label field to each classified packet. The traffic prioritization and bandwidth management algorithms reside in the management plane, which monitors and performs traffic shaping to ensure QoS at each node. The load adapter resides in the control plane and evaluates the network load balancing state to make the packet forwarding decision. The steps involved in the proposed work are depicted in the block diagram shown in Figure 1.
A multipath routing policy is adopted based on network load conditions and traffic engineering constraints, such that packets traversing the network experience minimum delay and maximum bandwidth. The control plane calculates the number of paths that can be provided for each incoming traffic flow according to its specific QoS requirements and the current network topology. The incoming traffic is then divided among these paths based on the available bandwidth and the delay experienced on each path. The MPLS default classifier agent is modified to identify VoIP and data flows separately. When a flow enters the MPLS edge node, the MPLS packet classifier inspects the label field; if no label is assigned, control is passed to the modified classifier agent. The flow classification algorithm uses threshold values for packet loss rate, delay and flow arrival rate. These thresholds are compared with the estimated values to identify the flow, and the packets are added to the corresponding queue for scheduling. The block diagram of flow identification and classification is shown in Figure 2.
Figure 1. MPVoIP architecture.
Figure 2. Block diagram of flow classification.
The International Telecommunication Union (ITU) standard G.711 voice codec produces 50 packets per second with a payload size of 160 bytes, a packetization period of 20 ms and a sample interval of 10 ms. The VoIP packet format is shown in Figure 3.
The packet is encapsulated by appending an 18-byte Ethernet (link layer) header, a 20-byte IP header, an 8-byte User Datagram Protocol (UDP) header and a 12-byte Real-time Transport Protocol (RTP) header to the original voice payload. Hence the VoIP packet size is 218 bytes. Current Internet voice applications such as Skype, Google Talk, Yahoo Voice and MSN VoIP generate 25, 21, 28 and 36 packets per second respectively. The maximum packet rate for the G.711 codec is 50 packets per second, which requires 218 × 50 × 8 = 87,200 bits per second (87.2 kbps) of bandwidth. Current VoIP applications transmit or receive more than 15 packets per second (218 × 15 × 8 = 26,160 bits per second). The threshold value for the flow arrival rate (T3) is therefore taken as the range from 26.16 kbps to 87.2 kbps, and the delay threshold T2 is set to 25 ms, just above the packetization period. The pseudo code for the Flow Classification algorithm is given as follows.
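The bandwidth arithmetic above can be reproduced with a short script:

```python
# G.711 VoIP packet budget, following the figures in the text.
HEADERS = 18 + 20 + 8 + 12   # Ethernet + IP + UDP + RTP headers (bytes)
PAYLOAD = 160                # G.711 payload per packet (bytes)
PKT_SIZE = HEADERS + PAYLOAD # 218 bytes on the wire

def voip_bandwidth_bps(pkts_per_sec):
    """Bandwidth in bits/s for a VoIP stream at the given packet rate."""
    return PKT_SIZE * pkts_per_sec * 8

print(voip_bandwidth_bps(50))   # 87200 -> 87.2 kbps (codec maximum)
print(voip_bandwidth_bps(15))   # 26160 -> 26.16 kbps (lower T3 bound)
```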
Pseudo Code for Flow Classification Algorithm
INPUT: Packet loss rate PLr, delay Dl, flow arrival rate FRi,
threshold value T1 = 0.01 (PLr), threshold value T2 = 25 ms,
threshold value T3: 26.16 kbps < T3 < 87.2 kbps
For a given set of network paths from source to destination P1, P2, P3, … Pi ∈ P
For (every node h in current path P)
For (every incoming Internet flow)
If (packet loss rate > T1 && delay > T2 &&
flow arrival rate > minimum threshold value of T3 &&
flow arrival rate < maximum threshold value of T3)
Mark flow as "VoIP Flow"
Else
Mark flow as "Normal Data Flow"
OUTPUT: Marked VoIP flow, Data flow
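The pseudocode above can be rendered as a minimal Python sketch. Threshold values follow the paper; the `Flow` record type and field names are assumptions for illustration:

```python
from dataclasses import dataclass

T1 = 0.01            # packet loss rate threshold (1%)
T2 = 0.025           # delay threshold (25 ms, in seconds)
T3_MIN = 26_160      # minimum VoIP flow arrival rate (bits/s)
T3_MAX = 87_200      # maximum VoIP flow arrival rate (bits/s)

@dataclass
class Flow:
    loss_rate: float     # fraction of packets lost
    delay: float         # one-way delay in seconds
    arrival_rate: float  # estimated flow arrival rate in bits/s

def classify(flow):
    """Mark a flow as VoIP or normal data per the threshold test."""
    if (flow.loss_rate > T1 and flow.delay > T2
            and T3_MIN < flow.arrival_rate < T3_MAX):
        return "VoIP Flow"
    return "Normal Data Flow"

print(classify(Flow(0.02, 0.03, 40_000)))    # VoIP Flow
print(classify(Flow(0.00, 0.01, 500_000)))   # Normal Data Flow
```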
4. Enhanced MPLS Based Multipath VoIP Packet Dispersion (EMPVOIP)
Further, the flow classification algorithm separates the unresponsive VoIP packets from the VoIP flow, and these
Figure 3. VoIP packet format.
packets are prioritized to manage the available bandwidth and to avoid congestion. The flowchart for the unresponsive voice transmission technique is provided in Figure 4. The flow arrival rate of each incoming flow is estimated using the exponential double averaging method. Active measurement of the packet loss rate is introduced by generating query request and response messages. A priority is assigned to each unresponsive voice packet using linear encoding. The dropping probability of the unresponsive voice packet is estimated and added to the label. During admission control, this probability assignment is used to admit the flow on the alternate paths.
The unresponsive packets are routed through the least loaded multiple label switched paths. If optimal LSPs with sufficient resources are not available, some packets from low priority unresponsive flows are dropped probabilistically to balance the load and preserve the high priority VoIP flows. The next available LSPs are then selected to route these unresponsive VoIP flows. The packet drop probability function is also used as a traffic shaping metric: a sudden increase in unresponsive voice packets may increase the system load, so packets from low priority unresponsive flows are dropped to balance the load.
4.1. Unresponsive Flow Identification
Internet traffic measurement can be applied at different protocol layers, especially the network layer, where individual packets are easily classified into flows by assigning an FEC. Traffic summaries and statistics such as packet counts are used for analysing network flows. A flow can be described as a sequence of packets exchanged between two nodes. To analyze flow characteristics, relevant information about each flow is stored rather than information about each packet.
This per-flow state information is used to find a congestion free optimal path. The unresponsive flow identification module operates in the network layer; it analyses the packet headers and payloads to estimate the flow arrival rate. The flow arrival rate estimation proposed by Cao et al. is extended with the exponential double averaging method. Let t_i(k) and l_i(k) be the arrival time and length of the k-th packet of flow i, respectively. The flow rate is measured for every incoming k-th packet, and the flow arrival rate FRi is estimated using the exponential double averaging method as given in Equations (1) and (2):

S_i(k) = K · r_i(k) + (1 − K) · S_i(k − 1) (1)

FR_i(k) = K · S_i(k) + (1 − K) · FR_i(k − 1) (2)

where r_i(k) = l_i(k) / (t_i(k) − t_i(k − 1)) is the instantaneous rate of flow i at the k-th packet, S_i(k) is the singly smoothed rate, and K is a constant (0 < K < 1), an exponential weight that gives a consistent value for bursty traffic without over-reacting to inter-arrival time differences. The closer K is to zero, the greater the smoothing effect on the estimated arrival rate.

Upon receiving a congestion notification, unresponsive flows do not reduce their sending rate, by their very nature. Therefore a dropping probability is applied to unresponsive voice packets.
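A small estimator sketch of this double averaging; the exact smoothing form is an assumption consistent with the description (the instantaneous rate is smoothed twice with the same weight K), and the class and symbol names are illustrative:

```python
class RateEstimator:
    """Exponential double averaging of a flow's arrival rate (sketch)."""

    def __init__(self, K=0.2):
        assert 0 < K < 1          # K is the exponential weight
        self.K = K
        self.last_arrival = None  # arrival time of the previous packet
        self.single = 0.0         # singly smoothed instantaneous rate
        self.rate = 0.0           # FRi: doubly smoothed flow arrival rate

    def on_packet(self, arrival_time, length_bits):
        """Update the estimate for one packet; returns the current FRi."""
        if self.last_arrival is not None:
            gap = arrival_time - self.last_arrival
            if gap > 0:
                inst = length_bits / gap   # instantaneous rate (bits/s)
                self.single = self.K * inst + (1 - self.K) * self.single
                self.rate = self.K * self.single + (1 - self.K) * self.rate
        self.last_arrival = arrival_time
        return self.rate

est = RateEstimator(K=0.2)
t, rate = 0.0, 0.0
for _ in range(500):          # 218-byte packets every 20 ms (G.711 pacing)
    rate = est.on_packet(t, 218 * 8)
    t += 0.02
print(round(rate))            # converges towards 87200 bits/s
```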
4.2. Packet Loss Rate Estimation
Unlike passive measurement, active measurement of the packet loss rate does not require capturing traffic at a specific location; instead, sample test packets are injected towards selected network nodes. In the technique used here, the ingress router generates a query message for a sample time interval. Figure 5 shows an example query message transmitted via an LSP. The packet loss rate is estimated by sending query messages at regular 100 ms intervals along the label switched path P from the ingress router to the egress router. Initially, a set of sample test packets is sent from the ingress router to the egress router on a particular label switched path P. The ingress router then sends a query message indicating the number of packets transmitted
Figure 4. Flow chart for unresponsive voice transmission technique.
Figure 5. Example query message.
at time t1 along path P.
The egress router receives the query message and sends a reply message with the count of packets received along path P at time t2. The number of lost packets is the difference between the number of packets transmitted and the number of packets received. The packet loss rate over the sample interval is then calculated from the number of test packets transmitted at the ingress node and the number of test packets received at the egress node.
Packet loss rate is estimated as given in Equation (3):

PLR = (Ntx − Nrx) / Ntx (3)

where Ntx is the number of test packets transmitted at the ingress node and Nrx is the number of test packets received at the egress node during the sample interval.
The estimated flow arrival rate FRi and the packet loss rate are used to identify unresponsive VoIP flows. The threshold values T1, T2 and T3 were introduced in MPVoIP; an additional threshold T4 defines the unresponsive flow arrival rate range. If the packet loss rate exceeds the threshold T1 (a packet loss rate of 1%) and the flow arrival rate FRi lies within the T4 range (87.2 kbps < T4 < 176 kbps), the flow is marked as an unresponsive VoIP flow.
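The loss-rate measurement and the unresponsive-flow test can be sketched together; the equation form and function names are assumptions based on the description above:

```python
T1 = 0.01          # packet loss rate threshold (1%)
T4_MIN = 87_200    # unresponsive flow arrival rate lower bound (bits/s)
T4_MAX = 176_000   # unresponsive flow arrival rate upper bound (bits/s)

def loss_rate(n_transmitted, n_received):
    """Fraction of test packets lost along the sampled LSP."""
    return (n_transmitted - n_received) / n_transmitted

def is_unresponsive(plr, arrival_rate):
    """A VoIP flow whose loss rate exceeds T1 while its arrival rate
    stays in the T4 band is marked as unresponsive."""
    return plr > T1 and T4_MIN < arrival_rate < T4_MAX

plr = loss_rate(1000, 975)                # 2.5% loss on the sampled LSP
print(is_unresponsive(plr, 120_000))      # True
print(is_unresponsive(0.005, 120_000))    # False (loss below T1)
```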
4.3. Policy Based Packet Dropping
The load adapter estimates the average buffer occupancy value (Bavg) to evaluate the network load. The average buffer occupancy value is used to estimate the dropping probability Pli for an unresponsive VoIP flow. As in the Random Early Drop queuing algorithm, the packet loss probability depends on the average buffer size (Bs). The probability for a packet is estimated when the packet is enqueued in the buffer for dispersion with its assigned priority. Let the packet count Pk, the number of packets enqueued for each unresponsive flow, be known. The probability constant Pc is estimated as given in Equation (4), and the probability Pli as given in Equation (5).
Pc = Bavg / Q (4)

Pli = Pc / (1 − Pk · Pc) (5)

where Q is the threshold value representing the maximum queue size, set to 75% of the total buffer capacity (Bs).
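A RED-style dropping sketch along these lines; the exact form of the paper's probability function is an assumption, so this mirrors the classic RED reading (probability grows with average occupancy and with the per-flow packet count):

```python
import random

def drop_probability(b_avg, q_max, pkt_count):
    """Per-packet drop probability for an unresponsive flow (RED-style).

    b_avg     -- average buffer occupancy Bavg (packets)
    q_max     -- queue threshold Q, 75% of buffer capacity Bs
    pkt_count -- Pk, packets of this flow already enqueued
    """
    pc = min(b_avg / q_max, 1.0)            # probability constant
    denom = 1.0 - pkt_count * pc
    if denom <= 0:
        return 1.0                          # buffer pressure too high: drop
    return min(pc / denom, 1.0)

def should_drop(b_avg, q_max, pkt_count, rng=random.random):
    """Bernoulli trial against the computed drop probability."""
    return rng() < drop_probability(b_avg, q_max, pkt_count)

print(drop_probability(10, 100, 3))   # light load -> small probability
print(drop_probability(90, 100, 3))   # near threshold -> certain drop
```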
The unresponsive flow is routed through the least loaded, congestion free multiple paths to balance the network load. Selecting multiple paths for the packet forwarding mechanism depends on the set of routing constraints; a prior estimation of bandwidth and delay ensures QoS-guaranteed path selection. To achieve load balancing, the incoming traffic is divided according to a traffic split ratio (αi) along the selected multiple paths on a per packet basis by adding an identifier. The traffic split identifier can be inserted in the TTL field of the MPLS header, as proposed by Avallone et al. Figure 6 shows the utilization of the TTL field to carry the traffic split identifier.
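A bit-level sketch of carrying the split identifier in the TTL field, assuming the standard 32-bit MPLS label stack entry layout (label 20 bits, EXP 3 bits, S 1 bit, TTL 8 bits); the function names are illustrative:

```python
def set_split_id(shim, split_id):
    """Overwrite the 8-bit TTL field of a 32-bit shim word with split_id."""
    assert 0 <= split_id <= 0xFF
    return (shim & 0xFFFFFF00) | split_id

def get_split_id(shim):
    """Read the traffic split identifier back out of the TTL field."""
    return shim & 0xFF

# Build a shim word: label, EXP, bottom-of-stack bit, TTL = 64
shim = (0x12345 << 12) | (0b101 << 9) | (1 << 8) | 64
tagged = set_split_id(shim, 3)    # assign the packet to sub-path 3
print(get_split_id(tagged))       # 3
```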
After a link or system failure, routing tables take a considerably long time to update, which increases congestion and packet losses. The core router therefore monitors congestion and alerts the nodes periodically. When congestion occurs, the priority marking and policy dropping shape the flow by dropping packets from low priority unresponsive flows. Since the priority is assigned to all unresponsive VoIP flows at the ingress router, the complexity of traffic shaping at the core routers is reduced.
5. Performance Analysis and Simulation Results
The simulation network for MPLS based Multipath VoIP Packet Dispersion is created with 100 nodes using
Figure 6. Modified label format.
mesh topology. All links are set up as full duplex links with 10 ms to 15 ms delay. Two routers are enabled as IP routers that forward incoming traffic from the IP network to the MPLS network and vice versa. The Label Edge Routers use Core Stateless Fair Queuing (CSFQ), and the Label Switch Routers use drop-tail queuing. The link bandwidth is set between 6 Mbps and 20 Mbps, and each link is modeled with a processing rate equal to its bandwidth. The packet size is 218 bytes and a total of 30 flows are initiated.
All MPLS nodes in the simulation scenario use a periodic round robin packet scheduler. A mixture of TCP and VoIP flows is used for simulation. The simulation parameters for the MPLS based VoIP Packet Dispersion network are shown in Table 1.
The simulation topology is similar to the IP topology, with the only difference being that nodes 1 through 98 are MPLS capable, which allows non-shortest-path links to be used for multipath routing. Nodes 0 and 99 are IP routers that distribute link state information to the MPLS edge routers. The edge router classifies the flows and distributes the VoIP flows using multipath routing; other normal flows are transmitted using single path routing. An error model is introduced to emulate real packet loss on the link between two nodes, with an exponential loss rate of 0.05.
The performance metrics, namely throughput, delay, packet loss rate (PLR) and link utilization, are measured for three different packet dispersion models: Enhanced MPLS based Multipath VoIP Packet Dispersion (EMPVoIP), MPLS based VoIP Packet Dispersion (MPVoIP) and shortest path routing (SP). Figure 7 shows the comparison of the packet loss rate for the different VoIP dispersions. An increase in the number of packets leads
Table 1. Simulation parameters for MPVoIP and SP.
Figure 7. PLR comparison.
to more packet losses in single path transmission. Compared with the other dispersions, the proposed EMPVoIP architecture ensures fewer packet losses. When 1250 packets are transmitted, the packet loss rate in EMPVoIP is 0.41 percent, compared to 0.64 percent in MPVoIP.
It is noticed that the packet loss rate is reduced by controlling the unresponsive VoIP flows. The average packet loss rate in EMPVoIP is 0.58 percentage points lower than in MPVoIP and 1.14 percentage points lower than in SP. Figure 8 shows the comparison of delay for the different VoIP dispersions.
When 500 packets are transmitted, the measured delay is 0.0039 seconds in EMPVoIP and 0.0056 seconds in MPVoIP, compared to 0.0074 seconds in SP. The simulation results in Figure 8 show that the average delay in EMPVoIP is 0.032 seconds lower than in MPVoIP and 0.079 seconds lower than in SP. Figure 9 shows the comparison of throughput for the different VoIP dispersions.
It is inferred from the graph that the throughput increases to a peak value of 14.258 Mbps in EMPVoIP, and that the throughput is increased by 20.742 Mbps in EMPVoIP compared with SP. Figure 10 shows the comparison of link utilization for the different VoIP dispersions. When 16 efficient links are available for routing, EMPVoIP utilizes 15 links and MPVoIP utilizes 14 links, compared to 9 in single path routing. Simulation results show that average link utilization increases by 50 percent in EMPVoIP compared with conventional single path routing.
It is observed from the simulation experiments that the performance of EMPVoIP improves in throughput, delay and packet loss rate even when the network is heavily loaded. When the network load is
Figure 8. Delay comparison.
Figure 9. Throughput comparison.
Figure 10. Comparison of the link utilization.
varied from 250 to 2000 packets, the average packet loss rate in EMPVoIP is reduced by 0.58 percentage points compared with MPVoIP and by 1.14 percentage points compared with SP. The packet loss rate is reduced by controlling the unresponsive VoIP flows in EMPVoIP, compared with the other dispersions. The average delay in EMPVoIP is 32 milliseconds lower than in MPVoIP and 79 milliseconds lower than in SP.
The proposed performance enhancement architecture for VoIP applications separates normal data flows from VoIP flows in the incoming traffic by measuring the flow arrival rate, packet loss rate and delay, and queues them for routing with MPLS label identifiers. The quality of service for VoIP flows is improved by routing them through multiple paths that satisfy the given input constraints. A VoIP flow whose arrival rate and packet loss rate exceed the specific threshold values is classified as unresponsive, and low priority is assigned to its packets. Results from the simulation experiments show that the QoS performance of the proposed architecture improves in terms of packet loss rate, delay and throughput. Link utilization and load balancing also improve when the incoming traffic is routed along multiple LSPs. Thus, the effectiveness of the system was assessed by simulating it with Network Simulator for the link utilization, packet loss, delay and throughput parameters, and the performance of the system architecture was shown to improve significantly. To study the performance of VoIP applications further, other important metrics such as playout time and jitter may be considered. Future work may focus on developing a novel buffer management scheme to minimize buffering time and packet loss; resource allocation and management can also be taken up to achieve better load balancing.