Received 30 April 2016; accepted 25 May 2016; published 5 July 2016
Recent advances in sensor technology and the availability of economical hardware such as microphones and Complementary Metal Oxide Semiconductor (CMOS) cameras have enabled the emergence of WMSNs. A wireless multimedia sensor network is a collection of a large number of wireless sensors outfitted with devices capable of sensing and retrieving images, audio/video streams and scalar data. Since the traffic generated by these sensors differs in importance, packets carrying important data must be given preference by allocating them more network resources than other packets, so that diverse levels of service can be achieved. To provide different levels of service to traffic flows with different requirements in terms of delay, throughput and packet loss, a priority can be assigned to each traffic flow. The two kinds of data traffic supported by WMSNs are InElastic Real Time (IERT) traffic and Elastic Non Real Time (ENRT) traffic. ENRT can adapt to extensive changes in throughput and delay, while IERT cannot; hence IERT necessitates preferential treatment. The major applications of WMSNs include multimedia surveillance, traffic control, health care, object locating systems and process control in industries.
WMSNs face problems due to the resource-constrained nature of sensor nodes, such as limited energy, processing capacity, bandwidth and memory. In WMSN applications, the tiny size of sensors, the power consumed by heavy traffic volumes and complex computations lead to congestion, which subsequently diminishes the QOS of applications. Hence the main objective in WMSNs is to develop algorithms that prolong the lifetime of the network while also guaranteeing the QOS requirements demanded by the applications.
The present study is based on the following considerations: probability based congestion prediction; a combination of three parameters for accurate determination of congestion; service discrimination among different kinds of data; and an algorithm for rate adjustment based on the probability based congestion prediction, the priority of the source data and the location of the source sensors.
2. Related Work
Congestion in WSN may occur for two reasons: collision in the medium or overflow in the buffer. This results in retransmissions and dropping of packets, which leads to long delay, energy waste, high response time and low throughput. In WMSN, an efficient method is needed to control congestion and reduce its effect. Any congestion control technique involves three steps: detection of congestion, notification about congestion and adjustment of the data rate. In WMSN congestion can be alleviated either end-to-end or hop-by-hop. The hop-by-hop approach is better than the end-to-end approach due to its lower time and energy consumption. Parameters useful for detecting congestion include buffer occupancy, channel load, packet service time, packet inter-arrival time and combinations of these parameters. The occurrence of congestion can be propagated to the rest of the nodes in the network either explicitly or implicitly. In the explicit method, control packets carrying the current state of congestion are propagated by congested nodes to other nodes using the broadcast nature of WSN. These additional packets increase the network load further, which leads to more congestion. In the implicit method, information about congestion is propagated to other nodes in the header portion of a data packet or in ACK packets using the piggybacking concept. This method does not add any additional load to the network. Congestion can be alleviated either by controlling the load, known as traffic control, or by increasing the availability of resources, known as resource control; otherwise, holes will be created in the network due to power exhaustion of nodes in the congested area. Compared to the resource control method, the traffic control method effectively saves resources by adjusting the data rate of the source node(s) and/or intermediate node(s) in the congested area.
The research community has devoted considerable effort to developing congestion control solutions for WSNs and WMSNs. The Congestion Detection and Avoidance (CODA) protocol uses channel load or buffer occupancy as an index to detect congestion; it controls transient congestion using an open-loop hop-by-hop backpressure mechanism and persistent congestion using a closed-loop end-to-end approach. Congestion Control and Fairness (CCF) uses the incoming and outgoing rates to detect the level of congestion. The difference between these two parameters is given as feedback for efficient utilization of the link and to improve the transmission rate. The available bandwidth is fairly allocated among transit traffic flows. The Priority-based Congestion Control Protocol (PCCP) uses the ratio between packet service time and packet inter-arrival time as an index to detect congestion, and proposes Priority based Rate Adjustment (PRA) to control it. The Fuzzy Logic Controller with the Exponential Weight of Priority-Based Rate Control (FEWPBRC) algorithm uses both input and output data rates for assessing the congestion level and applies fuzzy logic to determine the output rate of the sink node. This approach provides service discrimination among different categories of data types. Fusion suggested three mechanisms to alleviate congestion. In the first mechanism, hop-by-hop flow control, each child node stops its forward transmission once it overhears a packet with a congestion indication from its parent. In case of persistent congestion, a backpressure message reaches the source, which is forced to adjust its rate. The second mechanism, source limiting, uses the token bucket algorithm to regulate the traffic. The third mechanism, prioritized MAC, gives preference to congested nodes over non-congested nodes using a randomized back-off algorithm.
Interference aware Fair Rate Control (IFRC) algorithm  uses average buffer occupancy as a parameter for detecting congestion and it controls congestion by altering the outgoing rate of each link using AIMD scheme. In Sink-to-Sensors Congestion Control (CONSISE) approach  each upstream node detects the current level of congestion based on the outgoing rate of its downstream nodes. Each downstream node is allowed to choose a suitable upstream node to forward packets. At the time of congestion, the unselected upstream nodes will reduce their outgoing rate.
CODA, Fusion and IFRC use a static threshold value for detecting the inception of congestion, but finding such a static value in a dynamic environment is a complicated process. CODA and Fusion use the broadcast nature of WMSN to give neighboring nodes feedback about congestion, even though delivery of broadcast messages is not guaranteed. PCCP, EWPBRC and FEWPBRC ignore buffer occupancy in detecting the inception of congestion, which results in repeated buffer overflow. CCF provides only equal fairness among nodes; it cannot be used in applications where diverse sensors are purposely deployed in different geographical locations to obtain dissimilar, priority-dependent throughput.
The literature review shows that a probability based approach that considers a combination of buffer occupancy and incoming and outgoing data rates for congestion prediction has not been proposed so far by any priority aware congestion control protocol. In contrast to existing works, the present work uses a dynamic threshold value for predicting congestion. It considers buffer occupancy as one of the parameters to predict congestion, and it uses the piggybacking concept to propagate feedback about congestion, which assures guaranteed delivery of packets.
3. Probabilistic QOS Aware Congestion Control Approach
The present work is motivated by the limitations of EWPBRC and the Probability approach. EWPBRC detects congestion using the input and output data rates alone. Since buffer usage is not included as a parameter for detecting congestion, buffer overflow happens frequently, leading to packet loss, reduced throughput and long delivery delays. The Probability approach makes an equal rate adjustment for all nodes, including the source nodes. However, in a WMSN, sensor nodes may be deployed in various geographical locations based on their importance and hence need service differentiation.
In the present work, a probability based approach that considers three parameters namely buffer occupancy, incoming and outgoing data rates is used for detecting congestion intensity. Moreover during rate adjustment, preferential treatment is given to sensor nodes based on the class of data generated and their geographical locations.
Figure 1 shows the structure of the proposed congestion control unit. As in other congestion control approaches, this structure in each sensor node includes a Congestion Detection Unit (CDU), a Rate Modification Unit (RMU) and a Congestion Notification Unit (CNU).
The CDU predicts congestion in each sensor node. In the present work, the CDU computes the incoming and outgoing data rates and buffer occupancy of each sensor node to assess the congestion status using a probabilistic method. The value of the congestion status may be a negative or positive number. At each predetermined periodic time interval, each parent node computes the outgoing data rate of its children nodes in addition
Figure 1. Structure of congestion control unit.
to its own traffic. The priorities of sensor nodes differ from one another since they may be fitted with a variety of sensors. Hence each parent node must consider the priority of its children in determining the outgoing data rate of the child nodes. In WMSN, sensor nodes are sometimes purposely deployed in different geographical locations based on their importance, so that sensor nodes may have dissimilar priorities. The RMU determines the new outgoing data rate by considering the current congestion status, the priority of the source data and the priority based on the location of the sensor node. The new outgoing data rate is forwarded to the CNU, which notifies the child nodes about the new rate. For efficient utilization of network energy, in the present work the CNU uses the implicit method for communicating congestion notifications. Upon receiving a congestion notification, each sensor node adjusts its current outgoing data rate according to the value of the congestion status.
3.1. Probabilistic Congestion Prediction Approach (PCPA)
The PCPA predicts the congestion intensity in each sensor node with a probabilistic approach, using a dynamic threshold index on the buffer capacity and buffer occupancy of that node. The value of the threshold index varies from time to time based on the incoming and outgoing data rates of the sensor node and the remaining space in the buffer.
At time “t + 1”, the buffer occupancy for node “i” is given by Equation (1)

b_i(t + 1) = b_i(t) + r_i^in(t) − r_i^out(t) (1)

where r_i^in(t) and r_i^out(t) are the incoming and outgoing data rates for node “i” at time “t”.
At time “t + 1”, the threshold index Th_i(t + 1) for node “i” is determined using Equation (2), where B_i^max is the maximum capacity of the buffer for node “i”.
Equation (2) is applicable when there is dataflow in the node and the incoming data rate is higher than the outgoing data rate. The threshold index can be set to the maximum buffer capacity B_i^max when the incoming data rate is lower than the outgoing data rate or is zero. The dynamic nature of the threshold index is shown in Table 1, where a buffer capacity of 20 packets is assumed.
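The buffer-occupancy update and a dynamic threshold index in the spirit of Equation (2) can be sketched as follows. Since the body of Equation (2) is not reproduced above, the threshold rule below (leaving headroom proportional to the rate imbalance when the incoming rate exceeds the outgoing rate, and relaxing to full capacity otherwise) is an illustrative assumption, not the paper's formula:

```python
def next_buffer_occupancy(b_t, rate_in, rate_out):
    """Buffer occupancy at time t+1 from the occupancy and data rates at t."""
    return b_t + rate_in - rate_out

def dynamic_threshold(b_next, rate_in, rate_out, cap_buf):
    """Hypothetical dynamic threshold index: when the incoming rate exceeds
    the outgoing rate, shrink the headroom above the current occupancy in
    proportion to the rate imbalance; otherwise relax the threshold to the
    full buffer capacity (no congestion is then possible)."""
    if rate_in <= rate_out:
        return cap_buf
    remaining = max(cap_buf - b_next, 0)
    return b_next + remaining * rate_out / rate_in

# example with a 20-packet buffer, as assumed in Table 1
b_next = next_buffer_occupancy(8, 6, 4)   # 8 + 6 - 4 = 10 packets
th = dynamic_threshold(b_next, 6, 4, 20)  # lies between occupancy and capacity
```

Under this rule the threshold tightens as the buffer fills or as the inflow outpaces the outflow, matching the qualitative behaviour described for Table 1.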
The possibility of congestion at node “i” is determined by comparing the buffer occupancy with the threshold index:
1) if b_i(t + 1) ≤ Th_i(t + 1), then there is no congestion in the node;
2) if b_i(t + 1) > Th_i(t + 1), congestion is possible, and b_i(t + 1) ≥ B_i^max indicates congestion due to buffer overflow.
Hence the probability of congestion occurrence for node “i”, denoted V_i, is given by Equation (3).
The dynamic variable V_i is taken as the congestion_status. The value of V_i varies from time to time and indicates the probability of congestion occurring at node “i” when the buffer contains “x” packets in excess of the threshold index.
Let m_i be the set of neighbour nodes in the coverage area of node “i”. The probability of reception of a packet by node “i” from neighbour node “j” is given by Equation (4),
Table 1. Instances of dynamic threshold index.
where p_i is the probability for node “i” to absorb packets from its neighbours and (1 − p_i) is the probability of dropping packets.
The chance for a node to be in the vicinity of node “i” is given by C_i, the coverage range of node “i”, which has a value of 1.
The probability for node “i” to have a neighbour set of size “φ” is determined by Equation (5).
The probability expected by node “i” to receive packets from its neighbours is given by Equation (6).
The intensity of congestion at a node is calculated from Equations (3) and (6) and is propagated as feedback in the backward direction to indicate the inception of congestion. The level of congestion is predicted from the value of the congestion_status V_i as follows:
1) if V_i = 0, there is no congestion;
2) if V_i > 0, congestion occurs, and the value of V_i also indicates the level of congestion in the node.
The PCPA is denoted as (P1), for which the algorithm is given below:
Probability Based Congestion Prediction Algorithm (P1)
set ‘capbuf’ to maximum capacity of the buffer
set ‘m’ to the set of neighbour nodes
set value of 'C' to power level of the node
// calculate threshold index as in Equation (2)
if buffer occupancy <= threshold index then
    congestion_status = 0 // no possibility for congestion
else if buffer occupancy < capbuf then
    congestion_status = V // level of congestion possibility
else
    congestion_status = 1 // congestion due to buffer overflow
// propagate feedback about congestion to source
piggyback ACK with congestion_status
forward ACK to upstream nodes
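A compact Python rendering of P1 is sketched below. The middle case is an assumed form of the probability in Equation (3), namely the excess over the threshold normalised by the remaining space above it; the paper's exact expression is not reproduced in the text:

```python
def predict_congestion(buf_occ, threshold, cap_buf):
    """P1: map buffer occupancy to a congestion_status in [0, 1]."""
    if buf_occ <= threshold:
        return 0.0                  # no possibility for congestion
    if buf_occ >= cap_buf:
        return 1.0                  # congestion due to buffer overflow
    # level of congestion possibility V (assumed normalisation)
    return (buf_occ - threshold) / (cap_buf - threshold)

def piggyback(ack, congestion_status):
    """Append congestion_status to an ACK frame (implicit notification),
    so no extra control packets are injected into the network."""
    ack["congestion_status"] = congestion_status
    return ack

status = predict_congestion(15, 10, 20)   # V = (15 - 10) / (20 - 10) = 0.5
ack = piggyback({"seq": 42}, status)       # forwarded to upstream nodes
```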
This algorithm P1 is represented in terms of flow chart in Figure 2.
3.2. Rate Adjustment Based on Probabilistic Prediction
Each node in the network executes algorithm (P1) to determine the possibility of congestion. The congestion_status is appended to the Acknowledgement (ACK) frame using the piggybacking concept and propagated as feedback among upstream nodes. Hence the impact of congestion is minimized in a hop-by-hop manner, and according to the congestion_status values each upstream node adjusts its outgoing data rate.
Figure 2. Flowchart of prediction algorithm (P1).
The QOS requirements allow the entire network traffic to be sorted into InElastic Real Time (IERT) and Elastic Non Real Time (ENRT) traffic. ENRT traffic is further divided into High Priority ENRT (HPENRT), Medium Priority ENRT (MPENRT) and Low Priority ENRT (LPENRT) traffic. In terms of QOS parameters, these traffic flows are ordered as IERT, then HPENRT, then MPENRT, then LPENRT, with IERT receiving the highest level of service.
Figure 3 shows the network model used for simulation. It has one sink and twelve sensor nodes, numbered 1 to 12, and depicts a classic WMSN with sensors of different natures. This model supports single-path communication. Each sensor node in the model collects data from the real time inelastic traffic flow class IERT and from the three categories of non real time traffic flow classes, namely HPENRT, MPENRT and LPENRT.
Figure 4 shows the queuing style used in each node. Each traffic flow class has its own queue; to achieve discrimination among the different traffic flow classes, each sensor node attaches a flow class label to every packet, and each incoming packet is appended to its respective queue.
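The per-class queuing of Figure 4 can be sketched as follows (a minimal illustration; the queue names and packet fields are assumptions, not from the paper):

```python
from collections import deque

FLOW_CLASSES = ("IERT", "HPENRT", "MPENRT", "LPENRT")

class SensorNodeQueues:
    """One FIFO queue per traffic flow class, as in Figure 4."""
    def __init__(self):
        self.queues = {cls: deque() for cls in FLOW_CLASSES}

    def enqueue(self, packet):
        # the flow-class label attached by the sensor node selects the queue
        self.queues[packet["flow_class"]].append(packet)

    def dequeue(self, flow_class):
        return self.queues[flow_class].popleft()

node = SensorNodeQueues()
node.enqueue({"flow_class": "IERT", "payload": b"video-frame"})
node.enqueue({"flow_class": "LPENRT", "payload": b"scalar"})
```

Keeping one queue per class lets the rate-adjustment stage service high-priority classes ahead of low-priority ones without reordering packets within a class.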
Each sensor node is assigned two priorities: a traffic flow class priority and a location priority. It is assumed that each node has different source traffic flow classes. Let SP(i, j) represent the source priority of traffic flow class “j” in node “i”. This priority value can be set manually to provide different levels of service to different traffic flow classes. A traffic flow class with high priority is assigned a larger value than a traffic flow class with low priority.
Figure 3. Diagrammatic representation of the network model.
Figure 4. Queuing style used in each sensor node.
For each node “i”, the traffic flow class priority TP(i) is determined by Equation (8)

TP(i) = Σ_j SP(i, j) (8)

where “j” ranges over the categories of traffic flow classes IERT, HPENRT, MPENRT and LPENRT present in node “i”.
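With the class weights used later in the simulation (IERT = 7, HPENRT = 5, MPENRT = 3, LPENRT = 1), Equation (8) reduces to a sum over the classes a node actually carries:

```python
# per-class source priorities SP, matching the weights in Section 4
SP = {"IERT": 7, "HPENRT": 5, "MPENRT": 3, "LPENRT": 1}

def traffic_class_priority(classes_present):
    """Equation (8): sum the source priorities of a node's flow classes."""
    return sum(SP[c] for c in classes_present)

# node 1 carries HPENRT and LPENRT; node 3 carries all four classes
tp_node1 = traffic_class_priority(["HPENRT", "LPENRT"])                     # 5 + 1 = 6
tp_node3 = traffic_class_priority(["IERT", "HPENRT", "MPENRT", "LPENRT"])   # 7 + 5 + 3 + 1 = 16
```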
Generally in WSN, sensor nodes are fitted with a variety of sensors and are geographically located in different places based on their importance. Hence these sensors need preferential treatment to attain dissimilar throughput. To achieve this, each sensor node is assigned a location priority by the network administrator.
The local priority LP(i) of each node “i” is given in Equation (9).
Consider D(i) as the child set of node “i”. The overall priority OP(i) of each node is then given by Equation (10)

OP(i) = LP(i) + Σ_{j ∈ D(i)} OP(j) (10)
From Equation (10) it follows that when a node has no child, its overall priority equals its local priority. Moreover, the overall priority is calculated over active sources only. If a source is inactive then, irrespective of its traffic flow class, its priority is assigned a value of zero, which ensures that the present work distributes the available resources only among active sources.
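Reading Equation (10) as the usual parent-plus-children recursion, the overall priority can be sketched as below. The node names, the child topology and the local-priority values are illustrative assumptions:

```python
def overall_priority(node, local_priority, children, active):
    """OP(i) = LP(i) + sum of OP(j) over the child set D(i);
    an inactive source contributes zero regardless of its flow class."""
    own = local_priority[node] if active.get(node, True) else 0
    return own + sum(overall_priority(c, local_priority, children, active)
                     for c in children.get(node, []))

# two leaf children with local priorities 6 and 16 under a sink with LP = 0
local_priority = {"sink": 0, "n1": 6, "n3": 16}
children = {"sink": ["n1", "n3"]}
op_sink = overall_priority("sink", local_priority, children, {})   # 0 + 6 + 16 = 22
```

A leaf node's overall priority equals its local priority, and marking a source inactive removes its weight from every ancestor's overall priority, as the text requires.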
Probabilistic Prediction Based Rate Adjustment Algorithm (PPRAA)
The initialization of outgoing data rate is carried out as follows:
Consider t_s as the service time taken by the sink to service the current packet. The service time is measured from the moment a packet is sent from the network layer to the MAC layer until the network layer receives the message that the packet was successfully transmitted by the MAC layer. Formulating an exponentially weighted sum, the average service time is given by Equation (11)

t_s^avg(t) = (1 − w) × t_s^avg(t − 1) + w × t_s(t) (11)

where w is a constant, 0 < w < 1.

The output rate of the sink node is given by Equation (12)

r_sink(t) = 1 / t_s^avg(t) (12)
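The exponentially weighted service-time average and the sink rate of Equation (12) can be sketched as follows. The weighted-sum form and the value of w are illustrative assumptions (the text only states that w is a constant); the inverse relation follows the closing remark of this section that the sink's allowable rate is the inverse of its average service time:

```python
def update_avg_service_time(prev_avg, sample, w=0.1):
    """Exponentially weighted service-time average; w is a constant with
    0 < w < 1 (0.1 is an illustrative choice, not taken from the paper)."""
    return (1 - w) * prev_avg + w * sample

def sink_output_rate(avg_service_time):
    """Equation (12): the sink's allowable outgoing rate is the inverse
    of its average packet service time."""
    return 1.0 / avg_service_time

avg = update_avg_service_time(0.002, 0.004, w=0.5)   # 0.003 s
rate = sink_output_rate(avg)                          # ~333 packets/s
```

A larger w tracks recent service times more aggressively; a smaller w smooths out transient MAC-layer delays.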
Substituting the value from Equation (12), the outgoing data rate for each child node “i”, based on its own overall priority and the overall priority of the sink node, is given in Equation (13)

r_i(t) = r_sink(t) × OP(i) / OP(sink) (13)

where OP(sink) is the cumulative overall priority of the child nodes of the sink. Consider D(sink) as the child set of the sink node. The overall priority of the sink node is calculated from Equation (14)

OP(sink) = Σ_{i ∈ D(sink)} OP(i) (14)
These steps are repeated to determine the initial outgoing data rate of each node in the network.
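The priority-proportional split of Equations (13) and (14) can be sketched as follows (the sink rate of 100 packets/s and the child priorities are illustrative values):

```python
def child_rate(sink_rate, op_child, op_sink_total):
    """Equation (13): each child of the sink receives a share of the sink's
    outgoing rate proportional to its overall priority, where op_sink_total
    is the cumulative overall priority of the sink's children (Equation (14))."""
    return sink_rate * op_child / op_sink_total

# a sink rate of 100 packets/s split between children with OP 6 and 16
op_total = 6 + 16                 # Equation (14)
r1 = child_rate(100.0, 6, op_total)    # ~27.3 packets/s
r3 = child_rate(100.0, 16, op_total)   # ~72.7 packets/s
```

The shares always sum to the sink's rate, so the allocation is work-conserving while still favouring high-priority children.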
Congestion prediction and rate adjustment
For each predetermined time period, repeat the following steps at sink node:
Compute the possibility of congestion occurrence at the sink node as shown in Section 3.1.
The new outgoing data rate of each child node “i” of the sink is calculated from Equation (15) and propagated to each child node of the sink.
For each predetermined time interval (other nodes):
Compute the possibility of congestion occurrence at each parent node “i” as shown in Section 3.1. The new outgoing data rate for each child node “j” is then determined by its parent node “i” in the same manner, in proportion to the child's overall priority.
The rate determined in this way is used by each node as its new outgoing data rate.
In this algorithm, for each predetermined time interval, the outgoing data rate for each sensor node is determined by its parent node only and the outgoing data rate for each parent node is in turn determined by its own parent node. Since the sink has no parent node, its allowable outgoing data rate is determined from the inverse of its service time average.
The simulation of the present work has been carried out in NS2 simulator.
4. Results and Discussions
The existing works EWPBRC and FEWPBRC consider only the incoming and outgoing data rates for predicting the level of congestion in each sensor node; omitting buffer occupancy leads to buffer overflow. In the present work, congestion intensity is predicted using a probabilistic method based on the incoming and outgoing data rates and buffer occupancy. Moreover, the threshold value against which buffer occupancy is compared is dynamic, so efficient prediction of congestion intensity and subsequent adjustment of the outgoing data rate minimize packet drops. The proposed network model also allocates resources among sensor nodes efficiently, based on the priority of the source data and their location, thus maintaining good throughput.
The simulation of the present work was carried out as per the model shown in Figure 3. The work is simulated for 30 runs of 75 seconds each. Sensor nodes start sending data at the beginning of each simulation run and finish at its end. The location priority of all sensor nodes is assumed to be 1. The size of each packet is 250 bytes. The buffer size in every child node is 75 packets and the buffer size in the sink is 125 packets. Table 2 shows the parameters used for simulation.
The traffic flow classes present in each sensor node in the simulation model are shown in Table 3. The traffic flow classes IERT, HPENRT, MPENRT and LPENRT are assigned weight values of 7, 5, 3 and 1 respectively. The traffic flow class priority for node “1”, which carries the HPENRT and LPENRT traffic flow classes, is determined as 5 + 1 = 6. For node “3”, which carries all traffic flow classes, the traffic flow class priority is 7 + 5 + 3 + 1 = 16.
The performance of the present approach during 30 simulation runs is evaluated for various parameters such as throughput, loss probability, sink-to-base station delay and source-to-sink delay and is tabulated in Table 4.
The results of the present work are also compared with two existing congestion control approaches, EWPBRC and FEWPBRC (Figures 5-8); the comparative results are shown in Table 5.
Table 5 shows the averages over 30 runs of the various performance metrics for the existing works EWPBRC and FEWPBRC and the proposed work PQACC. From Table 5 it is clear that the average throughput achieved by PQACC is 9% higher than EWPBRC and 5.978% higher than FEWPBRC. The average sink-to-base station delay achieved by PQACC is 14.81% lower than EWPBRC and 11.51% lower than FEWPBRC. The average loss probability achieved by PQACC is 16.03% lower than EWPBRC and 11.69% lower than FEWPBRC. The average source-to-sink delay achieved by PQACC is 10.33% lower than EWPBRC and 7.05% lower than FEWPBRC.
Figure 5. Throughput to base station from sink over runs.
Figure 6. Sink-to-base station delay over runs.
Figure 7. Packet loss probabilities over runs.
Table 2. Simulation parameters.
Table 3. Status of traffic flow classes in each sensor node.
Table 4. Results in simulation runs for the present work.
Figure 8. Source to sink delay over runs.
Table 5. Comparative results.
Figure 5 shows the comparison of throughput, i.e., the total outgoing data rate from the sink node to the base station, in each run. The average throughput is 313.403 MB for EWPBRC, 322.33 MB for FEWPBRC and 341.61 MB for PQACC. The results show that PQACC performs well compared with EWPBRC and FEWPBRC. The increase in average throughput is due to the minimization of packet drops through efficient congestion prediction and the effective allocation of network resources among sensor nodes.
Figure 6 shows the comparison of sink-to-base station delay for EWPBRC, FEWPBRC and PQACC. Sink-to-base station delay is the time taken by the sink node to transmit data to the base station. The average sink-to-base station delay is 0.002119 seconds for EWPBRC, 0.00204 seconds for FEWPBRC and 0.001805 seconds for PQACC. The decrease in delay is due to the minimization of retransmissions caused by packet drops, reflecting faster delivery of data packets.
Figure 7 shows the comparison of packet loss probability for all three approaches. Packet loss probability is the ratio of the number of packets dropped in the network to the number of packets transmitted by the sensor nodes. The average packet loss probability is 14.03% for EWPBRC, 13.34% for FEWPBRC and 11.78% for PQACC. PQACC has the lowest packet loss probability. The decrease is due to efficient prediction of congestion and adjustment of the outgoing data rates of sensor nodes.
Figure 8 shows the source-to-sink delay per run, i.e., the time taken by each child node to forward data to the sink, for EWPBRC, FEWPBRC and PQACC. The average source-to-sink delays are 0.34242 seconds, 0.33034 seconds and 0.30703 seconds for EWPBRC, FEWPBRC and PQACC respectively. The results show that the source-to-sink delay is lowest for PQACC. This decrease is due to the effective allocation of network resources among sensor nodes and the efficient determination of their outgoing data rates.
The PQACC congestion control protocol was developed by considering the requirements of multimedia applications on WMSNs. To achieve QOS for the various types of data, each traffic type is assigned a priority. The outgoing data rate of each source is adjusted based on the priority of the data, the predicted level of congestion and the deployed location of the sensor nodes. The proposed approach allocates more network resources to high-priority traffic flows than to low-priority flows, so that real-time traffic gets preferential treatment over non-real-time traffic, a requirement of WMSNs. The threshold value used for comparing buffer occupancy to predict congestion intensity is dynamic, enabling efficient prediction. The simulation results show that PQACC efficiently diminishes congestion, providing better QOS. The probabilistic method used in the present study for predicting the level of congestion gives good results compared with the EWPBRC and FEWPBRC approaches; hence improvements are achieved in the throughput, delay and packet loss of the network. The proposed approach can be employed in tracking systems and monitoring applications. In future, the work can be optimized using a neuro-fuzzy approach.